Test Report: Docker_Linux_crio 21738

0f64f31b8846d8060cae128a3e5be9cc35c08ea3:2025-10-16:41932

Failed tests (38/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.25
35 TestAddons/parallel/Registry 13.95
36 TestAddons/parallel/RegistryCreds 0.4
37 TestAddons/parallel/Ingress 148.29
38 TestAddons/parallel/InspektorGadget 5.24
39 TestAddons/parallel/MetricsServer 5.33
41 TestAddons/parallel/CSI 47.49
42 TestAddons/parallel/Headlamp 2.53
43 TestAddons/parallel/CloudSpanner 5.25
44 TestAddons/parallel/LocalPath 8.28
45 TestAddons/parallel/NvidiaDevicePlugin 5.25
46 TestAddons/parallel/Yakd 5.24
47 TestAddons/parallel/AmdGpuDevicePlugin 5.25
98 TestFunctional/parallel/ServiceCmdConnect 603
117 TestFunctional/parallel/ImageCommands/ImageListShort 2.25
126 TestFunctional/parallel/ServiceCmd/DeployApp 600.64
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.01
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.99
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.2
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.34
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
154 TestFunctional/parallel/ServiceCmd/Format 0.53
155 TestFunctional/parallel/ServiceCmd/URL 0.56
191 TestJSONOutput/pause/Command 2.23
197 TestJSONOutput/unpause/Command 1.71
270 TestPause/serial/Pause 6.19
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.47
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.3
310 TestStartStop/group/old-k8s-version/serial/Pause 5.65
319 TestStartStop/group/no-preload/serial/Pause 5.93
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.37
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.47
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.15
337 TestStartStop/group/newest-cni/serial/Pause 5.92
344 TestStartStop/group/embed-certs/serial/Pause 7.75
355 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.65
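To reproduce any of these locally, each entry can be re-run through the suite's standard go test name filter. A minimal sketch, assuming a minikube source checkout and the same docker/crio combination as this job (the --minikube-start-args flag and the exact timeout are assumptions, not taken from this report):

	# Re-run a single failed test from the minikube repo root;
	# -run accepts any test name from the table above.
	go test ./test/integration -v -timeout 30m \
	  -run 'TestAddons/parallel/Registry' \
	  -args --minikube-start-args='--driver=docker --container-runtime=crio'
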
TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-431183 addons disable volcano --alsologtostderr -v=1: exit status 11 (247.922861ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1016 17:45:52.715519   21733 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:45:52.716019   21733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:45:52.716031   21733 out.go:374] Setting ErrFile to fd 2...
	I1016 17:45:52.716035   21733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:45:52.716258   21733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:45:52.716505   21733 mustload.go:65] Loading cluster: addons-431183
	I1016 17:45:52.716860   21733 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:45:52.716875   21733 addons.go:606] checking whether the cluster is paused
	I1016 17:45:52.716950   21733 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:45:52.716960   21733 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:45:52.717313   21733 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:45:52.736547   21733 ssh_runner.go:195] Run: systemctl --version
	I1016 17:45:52.736596   21733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:45:52.754492   21733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:45:52.852028   21733 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 17:45:52.852118   21733 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 17:45:52.886805   21733 cri.go:89] found id: "5de201fd76a95a80e483d6bee60036aeac5703e71d605551db1f90ed5e9267a7"
	I1016 17:45:52.886833   21733 cri.go:89] found id: "d2b409cc61d3e2b69dd7c9f80ddcd05def59f62a7cefd12f8c55f87dcd6806a2"
	I1016 17:45:52.886837   21733 cri.go:89] found id: "08fc54b7ecf7cfa954b370e40086d3605dd13a25c93db2881d92e1a2ceb32818"
	I1016 17:45:52.886841   21733 cri.go:89] found id: "65f92eb5c9126dc449d2f8032d89aa4e86a59da9246b185c71c940339c685cb2"
	I1016 17:45:52.886843   21733 cri.go:89] found id: "99bd9e93e1a1cfcb509eac334bc64e1eb422d9a27526d7a41c2e44010e147d51"
	I1016 17:45:52.886850   21733 cri.go:89] found id: "38a6424f0235c17e3591fa147b42d05aa0a270d890721e294fd2fe1e8fe7587f"
	I1016 17:45:52.886853   21733 cri.go:89] found id: "d2446d21f394d6a6ae9ea6cae0aa10dad1d25f3e04a34f1daec8456cb08d666b"
	I1016 17:45:52.886855   21733 cri.go:89] found id: "a6e738e35332b188947c0fa616526771177e13dac5478fce260658918fdb41b8"
	I1016 17:45:52.886858   21733 cri.go:89] found id: "b7a0a3afc5b5ee8079bdbc565b4e1804880cfc1ae2c416fa5c0bd0e8745f5bf3"
	I1016 17:45:52.886869   21733 cri.go:89] found id: "cbbc3b73b7dda0bb72f1dc0c07bdb653e82157caf2508af0e2dd8299580cbcc4"
	I1016 17:45:52.886872   21733 cri.go:89] found id: "dcfdf0dfc495c7bf8e3c5deff3d0667e25cef10a0ec40832e0dfed43139faddb"
	I1016 17:45:52.886875   21733 cri.go:89] found id: "cc78d2815338b398d4da9da584c4783f77c9936d139a116db70055573929cea8"
	I1016 17:45:52.886878   21733 cri.go:89] found id: "e825d0a32cabb6134f6fc2e76f0012259095724f465caf717fd20ec00f0f4761"
	I1016 17:45:52.886880   21733 cri.go:89] found id: "eec1c645d1dfa3722eaae814ffa92000df394ff370ff10ad29f463e88bc9c3a4"
	I1016 17:45:52.886883   21733 cri.go:89] found id: "8eb1df0ef8e8f68de86edf699e07499ed7450f104d16fdac57eadedc16c5f057"
	I1016 17:45:52.886893   21733 cri.go:89] found id: "eeac3283525767a2a2b238ada5540b61f7b05df294e06a358fe5559a660e9f17"
	I1016 17:45:52.886900   21733 cri.go:89] found id: "57066b214397994168ca58cce65e7697dc0566e9cbd79ddce6bd447b1bb937a0"
	I1016 17:45:52.886904   21733 cri.go:89] found id: "a03f0987c6223c30fbe18342b589910e10048b7c4dbf5c3f99e594bc81dbb424"
	I1016 17:45:52.886907   21733 cri.go:89] found id: "41d8ee31330479f662f22c3d691f1814f4211417eb9afa4e0e1436fff7459c0d"
	I1016 17:45:52.886909   21733 cri.go:89] found id: "45684000aebf96bc6ace9a9a3a1dc6c5c1d69d5fa2cba670031d05998617678d"
	I1016 17:45:52.886911   21733 cri.go:89] found id: "b6296707185d3b7e279abf2a92f9765e375112a754d9ea74965f9e1abd3911e2"
	I1016 17:45:52.886914   21733 cri.go:89] found id: "dff4028c6cade53db8c168852a237cc808da96d812dcf0a0d74a84d9fd8e1856"
	I1016 17:45:52.886916   21733 cri.go:89] found id: "9ddd87f44d89a802929d4d0b0c661f079cd91f596a5e0d6f5d10e18663962117"
	I1016 17:45:52.886919   21733 cri.go:89] found id: "11a2ed25b01f6ec1f1099f8328d958fe73b716ce518c158023e0e7704045d279"
	I1016 17:45:52.886921   21733 cri.go:89] found id: ""
	I1016 17:45:52.886967   21733 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 17:45:52.902118   21733 out.go:203] 
	W1016 17:45:52.903785   21733 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:45:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:45:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 17:45:52.903807   21733 out.go:285] * 
	* 
	W1016 17:45:52.906851   21733 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 17:45:52.908406   21733 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-431183 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)
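The exit-11 failure above comes from the addon-disable pause check: minikube lists kube-system containers with crictl over ssh, then runs "sudo runc list -f json" inside the node, and that command fails because runc's default state root, /run/runc, does not exist on this crio node. A minimal sketch for reproducing the check by hand (profile name taken from this run; the final grep is only a guess at where the runtime actually keeps its state):

	# Same command the pause check runs inside the node:
	minikube -p addons-431183 ssh -- sudo runc list -f json
	# expected: level=error msg="open /run/runc: no such file or directory"

	# Inspect which runtime state roots are actually present:
	minikube -p addons-431183 ssh -- sudo ls /run | grep -iE 'runc|crun|crio'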

TestAddons/parallel/Registry (13.95s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.36147ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-4gxbm" [760d1bfa-750e-4a66-92c9-6f7903ad398c] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002332894s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-r2qlf" [d8893400-4bc4-4eea-9742-a241e52d31e1] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00365441s
addons_test.go:392: (dbg) Run:  kubectl --context addons-431183 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-431183 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-431183 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.495514713s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 ip
2025/10/16 17:46:15 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-431183 addons disable registry --alsologtostderr -v=1: exit status 11 (236.444053ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1016 17:46:15.488658   24415 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:46:15.488963   24415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:15.488973   24415 out.go:374] Setting ErrFile to fd 2...
	I1016 17:46:15.488977   24415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:15.489201   24415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:46:15.489446   24415 mustload.go:65] Loading cluster: addons-431183
	I1016 17:46:15.489819   24415 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:15.489841   24415 addons.go:606] checking whether the cluster is paused
	I1016 17:46:15.489920   24415 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:15.489933   24415 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:46:15.490284   24415 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:46:15.508798   24415 ssh_runner.go:195] Run: systemctl --version
	I1016 17:46:15.508860   24415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:46:15.526727   24415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:46:15.625245   24415 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 17:46:15.625327   24415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 17:46:15.656178   24415 cri.go:89] found id: "5de201fd76a95a80e483d6bee60036aeac5703e71d605551db1f90ed5e9267a7"
	I1016 17:46:15.656212   24415 cri.go:89] found id: "d2b409cc61d3e2b69dd7c9f80ddcd05def59f62a7cefd12f8c55f87dcd6806a2"
	I1016 17:46:15.656216   24415 cri.go:89] found id: "08fc54b7ecf7cfa954b370e40086d3605dd13a25c93db2881d92e1a2ceb32818"
	I1016 17:46:15.656219   24415 cri.go:89] found id: "65f92eb5c9126dc449d2f8032d89aa4e86a59da9246b185c71c940339c685cb2"
	I1016 17:46:15.656222   24415 cri.go:89] found id: "99bd9e93e1a1cfcb509eac334bc64e1eb422d9a27526d7a41c2e44010e147d51"
	I1016 17:46:15.656229   24415 cri.go:89] found id: "38a6424f0235c17e3591fa147b42d05aa0a270d890721e294fd2fe1e8fe7587f"
	I1016 17:46:15.656231   24415 cri.go:89] found id: "d2446d21f394d6a6ae9ea6cae0aa10dad1d25f3e04a34f1daec8456cb08d666b"
	I1016 17:46:15.656234   24415 cri.go:89] found id: "a6e738e35332b188947c0fa616526771177e13dac5478fce260658918fdb41b8"
	I1016 17:46:15.656237   24415 cri.go:89] found id: "b7a0a3afc5b5ee8079bdbc565b4e1804880cfc1ae2c416fa5c0bd0e8745f5bf3"
	I1016 17:46:15.656245   24415 cri.go:89] found id: "cbbc3b73b7dda0bb72f1dc0c07bdb653e82157caf2508af0e2dd8299580cbcc4"
	I1016 17:46:15.656248   24415 cri.go:89] found id: "dcfdf0dfc495c7bf8e3c5deff3d0667e25cef10a0ec40832e0dfed43139faddb"
	I1016 17:46:15.656250   24415 cri.go:89] found id: "cc78d2815338b398d4da9da584c4783f77c9936d139a116db70055573929cea8"
	I1016 17:46:15.656253   24415 cri.go:89] found id: "e825d0a32cabb6134f6fc2e76f0012259095724f465caf717fd20ec00f0f4761"
	I1016 17:46:15.656255   24415 cri.go:89] found id: "eec1c645d1dfa3722eaae814ffa92000df394ff370ff10ad29f463e88bc9c3a4"
	I1016 17:46:15.656258   24415 cri.go:89] found id: "8eb1df0ef8e8f68de86edf699e07499ed7450f104d16fdac57eadedc16c5f057"
	I1016 17:46:15.656264   24415 cri.go:89] found id: "eeac3283525767a2a2b238ada5540b61f7b05df294e06a358fe5559a660e9f17"
	I1016 17:46:15.656267   24415 cri.go:89] found id: "57066b214397994168ca58cce65e7697dc0566e9cbd79ddce6bd447b1bb937a0"
	I1016 17:46:15.656270   24415 cri.go:89] found id: "a03f0987c6223c30fbe18342b589910e10048b7c4dbf5c3f99e594bc81dbb424"
	I1016 17:46:15.656272   24415 cri.go:89] found id: "41d8ee31330479f662f22c3d691f1814f4211417eb9afa4e0e1436fff7459c0d"
	I1016 17:46:15.656275   24415 cri.go:89] found id: "45684000aebf96bc6ace9a9a3a1dc6c5c1d69d5fa2cba670031d05998617678d"
	I1016 17:46:15.656277   24415 cri.go:89] found id: "b6296707185d3b7e279abf2a92f9765e375112a754d9ea74965f9e1abd3911e2"
	I1016 17:46:15.656279   24415 cri.go:89] found id: "dff4028c6cade53db8c168852a237cc808da96d812dcf0a0d74a84d9fd8e1856"
	I1016 17:46:15.656293   24415 cri.go:89] found id: "9ddd87f44d89a802929d4d0b0c661f079cd91f596a5e0d6f5d10e18663962117"
	I1016 17:46:15.656296   24415 cri.go:89] found id: "11a2ed25b01f6ec1f1099f8328d958fe73b716ce518c158023e0e7704045d279"
	I1016 17:46:15.656298   24415 cri.go:89] found id: ""
	I1016 17:46:15.656345   24415 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 17:46:15.671988   24415 out.go:203] 
	W1016 17:46:15.673379   24415 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 17:46:15.673401   24415 out.go:285] * 
	* 
	W1016 17:46:15.676919   24415 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 17:46:15.679364   24415 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-431183 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.95s)

TestAddons/parallel/RegistryCreds (0.4s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.180805ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-431183
addons_test.go:332: (dbg) Run:  kubectl --context addons-431183 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-431183 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (244.78238ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1016 17:46:20.713140   24961 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:46:20.713427   24961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:20.713438   24961 out.go:374] Setting ErrFile to fd 2...
	I1016 17:46:20.713444   24961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:20.713685   24961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:46:20.713977   24961 mustload.go:65] Loading cluster: addons-431183
	I1016 17:46:20.714333   24961 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:20.714351   24961 addons.go:606] checking whether the cluster is paused
	I1016 17:46:20.714454   24961 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:20.714470   24961 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:46:20.714862   24961 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:46:20.735737   24961 ssh_runner.go:195] Run: systemctl --version
	I1016 17:46:20.735815   24961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:46:20.756010   24961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:46:20.853359   24961 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 17:46:20.853444   24961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 17:46:20.886414   24961 cri.go:89] found id: "5de201fd76a95a80e483d6bee60036aeac5703e71d605551db1f90ed5e9267a7"
	I1016 17:46:20.886436   24961 cri.go:89] found id: "d2b409cc61d3e2b69dd7c9f80ddcd05def59f62a7cefd12f8c55f87dcd6806a2"
	I1016 17:46:20.886442   24961 cri.go:89] found id: "08fc54b7ecf7cfa954b370e40086d3605dd13a25c93db2881d92e1a2ceb32818"
	I1016 17:46:20.886447   24961 cri.go:89] found id: "65f92eb5c9126dc449d2f8032d89aa4e86a59da9246b185c71c940339c685cb2"
	I1016 17:46:20.886452   24961 cri.go:89] found id: "99bd9e93e1a1cfcb509eac334bc64e1eb422d9a27526d7a41c2e44010e147d51"
	I1016 17:46:20.886457   24961 cri.go:89] found id: "38a6424f0235c17e3591fa147b42d05aa0a270d890721e294fd2fe1e8fe7587f"
	I1016 17:46:20.886462   24961 cri.go:89] found id: "d2446d21f394d6a6ae9ea6cae0aa10dad1d25f3e04a34f1daec8456cb08d666b"
	I1016 17:46:20.886466   24961 cri.go:89] found id: "a6e738e35332b188947c0fa616526771177e13dac5478fce260658918fdb41b8"
	I1016 17:46:20.886470   24961 cri.go:89] found id: "b7a0a3afc5b5ee8079bdbc565b4e1804880cfc1ae2c416fa5c0bd0e8745f5bf3"
	I1016 17:46:20.886477   24961 cri.go:89] found id: "cbbc3b73b7dda0bb72f1dc0c07bdb653e82157caf2508af0e2dd8299580cbcc4"
	I1016 17:46:20.886482   24961 cri.go:89] found id: "dcfdf0dfc495c7bf8e3c5deff3d0667e25cef10a0ec40832e0dfed43139faddb"
	I1016 17:46:20.886486   24961 cri.go:89] found id: "cc78d2815338b398d4da9da584c4783f77c9936d139a116db70055573929cea8"
	I1016 17:46:20.886490   24961 cri.go:89] found id: "e825d0a32cabb6134f6fc2e76f0012259095724f465caf717fd20ec00f0f4761"
	I1016 17:46:20.886495   24961 cri.go:89] found id: "eec1c645d1dfa3722eaae814ffa92000df394ff370ff10ad29f463e88bc9c3a4"
	I1016 17:46:20.886499   24961 cri.go:89] found id: "8eb1df0ef8e8f68de86edf699e07499ed7450f104d16fdac57eadedc16c5f057"
	I1016 17:46:20.886512   24961 cri.go:89] found id: "eeac3283525767a2a2b238ada5540b61f7b05df294e06a358fe5559a660e9f17"
	I1016 17:46:20.886517   24961 cri.go:89] found id: "57066b214397994168ca58cce65e7697dc0566e9cbd79ddce6bd447b1bb937a0"
	I1016 17:46:20.886523   24961 cri.go:89] found id: "a03f0987c6223c30fbe18342b589910e10048b7c4dbf5c3f99e594bc81dbb424"
	I1016 17:46:20.886527   24961 cri.go:89] found id: "41d8ee31330479f662f22c3d691f1814f4211417eb9afa4e0e1436fff7459c0d"
	I1016 17:46:20.886531   24961 cri.go:89] found id: "45684000aebf96bc6ace9a9a3a1dc6c5c1d69d5fa2cba670031d05998617678d"
	I1016 17:46:20.886535   24961 cri.go:89] found id: "b6296707185d3b7e279abf2a92f9765e375112a754d9ea74965f9e1abd3911e2"
	I1016 17:46:20.886540   24961 cri.go:89] found id: "dff4028c6cade53db8c168852a237cc808da96d812dcf0a0d74a84d9fd8e1856"
	I1016 17:46:20.886544   24961 cri.go:89] found id: "9ddd87f44d89a802929d4d0b0c661f079cd91f596a5e0d6f5d10e18663962117"
	I1016 17:46:20.886548   24961 cri.go:89] found id: "11a2ed25b01f6ec1f1099f8328d958fe73b716ce518c158023e0e7704045d279"
	I1016 17:46:20.886552   24961 cri.go:89] found id: ""
	I1016 17:46:20.886594   24961 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 17:46:20.902182   24961 out.go:203] 
	W1016 17:46:20.903385   24961 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 17:46:20.903406   24961 out.go:285] * 
	* 
	W1016 17:46:20.907299   24961 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 17:46:20.908595   24961 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-431183 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.40s)

TestAddons/parallel/Ingress (148.29s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-431183 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-431183 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-431183 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [5edeea58-2186-4fb5-aa1d-cf7195cf5bea] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [5edeea58-2186-4fb5-aa1d-cf7195cf5bea] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003191551s
I1016 17:46:22.780666   12375 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-431183 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.715688271s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
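For reference, the ssh step propagates the remote command's exit status, and curl exit code 28 means the operation timed out: the request reached the node, but the ingress never answered within the roughly 2m15s the test waited. A quick manual re-check with an explicit cap (profile and Host header taken from this run):

	# Repeat the probe with a 10-second max-time instead of waiting
	# out the full test timeout:
	minikube -p addons-431183 ssh -- curl -s -m 10 -H 'Host: nginx.example.com' http://127.0.0.1/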
addons_test.go:288: (dbg) Run:  kubectl --context addons-431183 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-431183
helpers_test.go:243: (dbg) docker inspect addons-431183:

-- stdout --
	[
	    {
	        "Id": "895cc9c3f83025006ec3ea11bf2fd98c009ef5fe1d2b7e3e9fe3fbbc1ec18d06",
	        "Created": "2025-10-16T17:44:06.387675641Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14372,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T17:44:06.423998189Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/895cc9c3f83025006ec3ea11bf2fd98c009ef5fe1d2b7e3e9fe3fbbc1ec18d06/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/895cc9c3f83025006ec3ea11bf2fd98c009ef5fe1d2b7e3e9fe3fbbc1ec18d06/hostname",
	        "HostsPath": "/var/lib/docker/containers/895cc9c3f83025006ec3ea11bf2fd98c009ef5fe1d2b7e3e9fe3fbbc1ec18d06/hosts",
	        "LogPath": "/var/lib/docker/containers/895cc9c3f83025006ec3ea11bf2fd98c009ef5fe1d2b7e3e9fe3fbbc1ec18d06/895cc9c3f83025006ec3ea11bf2fd98c009ef5fe1d2b7e3e9fe3fbbc1ec18d06-json.log",
	        "Name": "/addons-431183",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-431183:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-431183",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "895cc9c3f83025006ec3ea11bf2fd98c009ef5fe1d2b7e3e9fe3fbbc1ec18d06",
	                "LowerDir": "/var/lib/docker/overlay2/aa169f083b306b92b8ffc6a8df14e68bdd567caa0c4222bec847e7cca2f2c769-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aa169f083b306b92b8ffc6a8df14e68bdd567caa0c4222bec847e7cca2f2c769/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aa169f083b306b92b8ffc6a8df14e68bdd567caa0c4222bec847e7cca2f2c769/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aa169f083b306b92b8ffc6a8df14e68bdd567caa0c4222bec847e7cca2f2c769/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-431183",
	                "Source": "/var/lib/docker/volumes/addons-431183/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-431183",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-431183",
	                "name.minikube.sigs.k8s.io": "addons-431183",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f8da9d1ff89a6310d47d61860ef73957be2385ff63316af8ca19c0f0c40b565",
	            "SandboxKey": "/var/run/docker/netns/8f8da9d1ff89",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-431183": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:08:bb:cf:90:a1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6e08f5c6d684788b8c0cacd5c9a403d01405022a8e87923ecb8c1b8d83c9dfa7",
	                    "EndpointID": "60a2e38c312dc4d2d88c9bbcd02814052f9c1ca403726fc3d39d5bef4a98fa9b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-431183",
	                        "895cc9c3f830"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-431183 -n addons-431183
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-431183 logs -n 25: (1.153880372s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-905459 --alsologtostderr --binary-mirror http://127.0.0.1:34337 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-905459 │ jenkins │ v1.37.0 │ 16 Oct 25 17:43 UTC │                     │
	│ delete  │ -p binary-mirror-905459                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-905459 │ jenkins │ v1.37.0 │ 16 Oct 25 17:43 UTC │ 16 Oct 25 17:43 UTC │
	│ addons  │ enable dashboard -p addons-431183                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:43 UTC │                     │
	│ addons  │ disable dashboard -p addons-431183                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:43 UTC │                     │
	│ start   │ -p addons-431183 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:43 UTC │ 16 Oct 25 17:45 UTC │
	│ addons  │ addons-431183 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:45 UTC │                     │
	│ addons  │ addons-431183 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:46 UTC │                     │
	│ addons  │ enable headlamp -p addons-431183 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:46 UTC │                     │
	│ addons  │ addons-431183 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:46 UTC │                     │
	│ addons  │ addons-431183 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:46 UTC │                     │
	│ addons  │ addons-431183 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:46 UTC │                     │
	│ addons  │ addons-431183 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:46 UTC │                     │
	│ addons  │ addons-431183 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:46 UTC │                     │
	│ ssh     │ addons-431183 ssh cat /opt/local-path-provisioner/pvc-b51ae802-df03-41ae-8349-d78df8b133fd_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:46 UTC │ 16 Oct 25 17:46 UTC │
	│ addons  │ addons-431183 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:46 UTC │                     │
	│ ip      │ addons-431183 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:46 UTC │ 16 Oct 25 17:46 UTC │
	│ addons  │ addons-431183 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:46 UTC │                     │
	│ addons  │ addons-431183 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:46 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-431183                                                                                                                                                                                                                                                                                                                                                                                           │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:46 UTC │ 16 Oct 25 17:46 UTC │
	│ addons  │ addons-431183 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:46 UTC │                     │
	│ addons  │ addons-431183 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:46 UTC │                     │
	│ ssh     │ addons-431183 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:46 UTC │                     │
	│ addons  │ addons-431183 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:46 UTC │                     │
	│ addons  │ addons-431183 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:46 UTC │                     │
	│ ip      │ addons-431183 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-431183        │ jenkins │ v1.37.0 │ 16 Oct 25 17:48 UTC │ 16 Oct 25 17:48 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
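	For reference, any row in the audit table above can be replayed against a live profile. A minimal sketch, assuming the same profile name and the tree-local binary used by this run:
	
		# Replays the ssh/curl ingress probe recorded in the table (reproduction sketch)
		out/minikube-linux-amd64 -p addons-431183 ssh -- \
		  curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'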
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 17:43:41
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 17:43:41.770608   13712 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:43:41.770731   13712 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:43:41.770739   13712 out.go:374] Setting ErrFile to fd 2...
	I1016 17:43:41.770746   13712 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:43:41.770947   13712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:43:41.771506   13712 out.go:368] Setting JSON to false
	I1016 17:43:41.772328   13712 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1570,"bootTime":1760635052,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 17:43:41.772415   13712 start.go:141] virtualization: kvm guest
	I1016 17:43:41.774284   13712 out.go:179] * [addons-431183] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 17:43:41.775903   13712 notify.go:220] Checking for updates...
	I1016 17:43:41.775933   13712 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 17:43:41.777672   13712 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 17:43:41.779109   13712 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 17:43:41.780667   13712 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 17:43:41.782237   13712 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 17:43:41.783735   13712 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 17:43:41.785258   13712 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 17:43:41.808727   13712 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 17:43:41.808805   13712 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 17:43:41.867519   13712 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-16 17:43:41.858599982 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 17:43:41.867626   13712 docker.go:318] overlay module found
	I1016 17:43:41.869454   13712 out.go:179] * Using the docker driver based on user configuration
	I1016 17:43:41.870828   13712 start.go:305] selected driver: docker
	I1016 17:43:41.870843   13712 start.go:925] validating driver "docker" against <nil>
	I1016 17:43:41.870854   13712 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 17:43:41.871372   13712 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 17:43:41.926126   13712 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-16 17:43:41.915354408 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 17:43:41.926325   13712 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 17:43:41.926621   13712 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 17:43:41.928923   13712 out.go:179] * Using Docker driver with root privileges
	I1016 17:43:41.930221   13712 cni.go:84] Creating CNI manager for ""
	I1016 17:43:41.930287   13712 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 17:43:41.930304   13712 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1016 17:43:41.930379   13712 start.go:349] cluster config:
	{Name:addons-431183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-431183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 17:43:41.931782   13712 out.go:179] * Starting "addons-431183" primary control-plane node in "addons-431183" cluster
	I1016 17:43:41.933210   13712 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 17:43:41.934674   13712 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 17:43:41.935945   13712 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 17:43:41.935988   13712 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1016 17:43:41.936001   13712 cache.go:58] Caching tarball of preloaded images
	I1016 17:43:41.936094   13712 preload.go:233] Found /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1016 17:43:41.936101   13712 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 17:43:41.936108   13712 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 17:43:41.936411   13712 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/config.json ...
	I1016 17:43:41.936440   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/config.json: {Name:mk2eceda1a8c022755b511272da50341dbc13339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:43:41.952824   13712 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 to local cache
	I1016 17:43:41.952965   13712 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory
	I1016 17:43:41.952989   13712 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory, skipping pull
	I1016 17:43:41.952996   13712 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in cache, skipping pull
	I1016 17:43:41.953006   13712 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 as a tarball
	I1016 17:43:41.953014   13712 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 from local cache
	I1016 17:43:54.602209   13712 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 from cached tarball
	I1016 17:43:54.602242   13712 cache.go:232] Successfully downloaded all kic artifacts
	I1016 17:43:54.602280   13712 start.go:360] acquireMachinesLock for addons-431183: {Name:mkc268cc7edc28cd51d10e7128f020d2864cbc75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 17:43:54.602387   13712 start.go:364] duration metric: took 87.699µs to acquireMachinesLock for "addons-431183"
	I1016 17:43:54.602410   13712 start.go:93] Provisioning new machine with config: &{Name:addons-431183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-431183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 17:43:54.602477   13712 start.go:125] createHost starting for "" (driver="docker")
	I1016 17:43:54.605014   13712 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1016 17:43:54.605215   13712 start.go:159] libmachine.API.Create for "addons-431183" (driver="docker")
	I1016 17:43:54.605247   13712 client.go:168] LocalClient.Create starting
	I1016 17:43:54.605353   13712 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem
	I1016 17:43:55.144917   13712 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem
	I1016 17:43:55.343026   13712 cli_runner.go:164] Run: docker network inspect addons-431183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1016 17:43:55.362048   13712 cli_runner.go:211] docker network inspect addons-431183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1016 17:43:55.362153   13712 network_create.go:284] running [docker network inspect addons-431183] to gather additional debugging logs...
	I1016 17:43:55.362174   13712 cli_runner.go:164] Run: docker network inspect addons-431183
	W1016 17:43:55.378481   13712 cli_runner.go:211] docker network inspect addons-431183 returned with exit code 1
	I1016 17:43:55.378505   13712 network_create.go:287] error running [docker network inspect addons-431183]: docker network inspect addons-431183: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-431183 not found
	I1016 17:43:55.378520   13712 network_create.go:289] output of [docker network inspect addons-431183]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-431183 not found
	
	** /stderr **
	I1016 17:43:55.378617   13712 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 17:43:55.396224   13712 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001508700}
	I1016 17:43:55.396271   13712 network_create.go:124] attempt to create docker network addons-431183 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1016 17:43:55.396314   13712 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-431183 addons-431183
	I1016 17:43:55.453270   13712 network_create.go:108] docker network addons-431183 192.168.49.0/24 created
	I1016 17:43:55.453300   13712 kic.go:121] calculated static IP "192.168.49.2" for the "addons-431183" container
	I1016 17:43:55.453355   13712 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1016 17:43:55.470121   13712 cli_runner.go:164] Run: docker volume create addons-431183 --label name.minikube.sigs.k8s.io=addons-431183 --label created_by.minikube.sigs.k8s.io=true
	I1016 17:43:55.489442   13712 oci.go:103] Successfully created a docker volume addons-431183
	I1016 17:43:55.489529   13712 cli_runner.go:164] Run: docker run --rm --name addons-431183-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-431183 --entrypoint /usr/bin/test -v addons-431183:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1016 17:44:01.882338   13712 cli_runner.go:217] Completed: docker run --rm --name addons-431183-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-431183 --entrypoint /usr/bin/test -v addons-431183:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib: (6.392770221s)
	I1016 17:44:01.882373   13712 oci.go:107] Successfully prepared a docker volume addons-431183
	I1016 17:44:01.882392   13712 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 17:44:01.882411   13712 kic.go:194] Starting extracting preloaded images to volume ...
	I1016 17:44:01.882467   13712 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-431183:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1016 17:44:06.310204   13712 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-431183:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.427693666s)
	I1016 17:44:06.310238   13712 kic.go:203] duration metric: took 4.427823614s to extract preloaded images to volume ...
	W1016 17:44:06.310336   13712 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1016 17:44:06.310369   13712 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1016 17:44:06.310404   13712 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1016 17:44:06.370383   13712 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-431183 --name addons-431183 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-431183 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-431183 --network addons-431183 --ip 192.168.49.2 --volume addons-431183:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1016 17:44:06.684259   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Running}}
	I1016 17:44:06.703670   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:06.723556   13712 cli_runner.go:164] Run: docker exec addons-431183 stat /var/lib/dpkg/alternatives/iptables
	I1016 17:44:06.773748   13712 oci.go:144] the created container "addons-431183" has a running status.
	I1016 17:44:06.773779   13712 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa...
	I1016 17:44:07.012015   13712 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1016 17:44:07.043089   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:07.064648   13712 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1016 17:44:07.064666   13712 kic_runner.go:114] Args: [docker exec --privileged addons-431183 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1016 17:44:07.116507   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:07.135217   13712 machine.go:93] provisionDockerMachine start ...
	I1016 17:44:07.135324   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:07.154708   13712 main.go:141] libmachine: Using SSH client type: native
	I1016 17:44:07.155094   13712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1016 17:44:07.155116   13712 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 17:44:07.293900   13712 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-431183
	
	I1016 17:44:07.293925   13712 ubuntu.go:182] provisioning hostname "addons-431183"
	I1016 17:44:07.294016   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:07.313273   13712 main.go:141] libmachine: Using SSH client type: native
	I1016 17:44:07.313509   13712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1016 17:44:07.313526   13712 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-431183 && echo "addons-431183" | sudo tee /etc/hostname
	I1016 17:44:07.460181   13712 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-431183
	
	I1016 17:44:07.460245   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:07.477880   13712 main.go:141] libmachine: Using SSH client type: native
	I1016 17:44:07.478102   13712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1016 17:44:07.478119   13712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-431183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-431183/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-431183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 17:44:07.614372   13712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 17:44:07.614397   13712 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8849/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8849/.minikube}
	I1016 17:44:07.614429   13712 ubuntu.go:190] setting up certificates
	I1016 17:44:07.614442   13712 provision.go:84] configureAuth start
	I1016 17:44:07.614494   13712 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-431183
	I1016 17:44:07.631800   13712 provision.go:143] copyHostCerts
	I1016 17:44:07.631874   13712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem (1078 bytes)
	I1016 17:44:07.631978   13712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem (1123 bytes)
	I1016 17:44:07.632040   13712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem (1679 bytes)
	I1016 17:44:07.632092   13712 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem org=jenkins.addons-431183 san=[127.0.0.1 192.168.49.2 addons-431183 localhost minikube]
	I1016 17:44:07.801457   13712 provision.go:177] copyRemoteCerts
	I1016 17:44:07.801514   13712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 17:44:07.801547   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:07.820035   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:07.916910   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 17:44:07.936393   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1016 17:44:07.953787   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 17:44:07.971644   13712 provision.go:87] duration metric: took 357.188788ms to configureAuth
	I1016 17:44:07.971674   13712 ubuntu.go:206] setting minikube options for container-runtime
	I1016 17:44:07.971894   13712 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:44:07.972120   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:07.989408   13712 main.go:141] libmachine: Using SSH client type: native
	I1016 17:44:07.989645   13712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1016 17:44:07.989672   13712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 17:44:08.234732   13712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 17:44:08.234756   13712 machine.go:96] duration metric: took 1.099516456s to provisionDockerMachine
	I1016 17:44:08.234772   13712 client.go:171] duration metric: took 13.629513967s to LocalClient.Create
	I1016 17:44:08.234794   13712 start.go:167] duration metric: took 13.629578272s to libmachine.API.Create "addons-431183"
	I1016 17:44:08.234806   13712 start.go:293] postStartSetup for "addons-431183" (driver="docker")
	I1016 17:44:08.234819   13712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 17:44:08.234877   13712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 17:44:08.234910   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:08.252434   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:08.351659   13712 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 17:44:08.355376   13712 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 17:44:08.355406   13712 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 17:44:08.355423   13712 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/addons for local assets ...
	I1016 17:44:08.355480   13712 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/files for local assets ...
	I1016 17:44:08.355510   13712 start.go:296] duration metric: took 120.696481ms for postStartSetup
	I1016 17:44:08.355846   13712 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-431183
	I1016 17:44:08.373580   13712 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/config.json ...
	I1016 17:44:08.373905   13712 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 17:44:08.373956   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:08.392861   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:08.486913   13712 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 17:44:08.491333   13712 start.go:128] duration metric: took 13.888844233s to createHost
	I1016 17:44:08.491353   13712 start.go:83] releasing machines lock for "addons-431183", held for 13.888955087s
	I1016 17:44:08.491424   13712 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-431183
	I1016 17:44:08.508740   13712 ssh_runner.go:195] Run: cat /version.json
	I1016 17:44:08.508788   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:08.508795   13712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 17:44:08.508868   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:08.527887   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:08.528689   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:08.675391   13712 ssh_runner.go:195] Run: systemctl --version
	I1016 17:44:08.681574   13712 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 17:44:08.715477   13712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 17:44:08.719994   13712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 17:44:08.720059   13712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 17:44:08.746341   13712 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1016 17:44:08.746381   13712 start.go:495] detecting cgroup driver to use...
	I1016 17:44:08.746419   13712 detect.go:190] detected "systemd" cgroup driver on host os
	I1016 17:44:08.746461   13712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 17:44:08.762274   13712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 17:44:08.774871   13712 docker.go:218] disabling cri-docker service (if available) ...
	I1016 17:44:08.774935   13712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 17:44:08.791306   13712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 17:44:08.808595   13712 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 17:44:08.887688   13712 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 17:44:08.975071   13712 docker.go:234] disabling docker service ...
	I1016 17:44:08.975127   13712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 17:44:08.992698   13712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 17:44:09.005339   13712 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 17:44:09.090684   13712 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 17:44:09.171316   13712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 17:44:09.183781   13712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 17:44:09.197470   13712 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 17:44:09.197611   13712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:09.208219   13712 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1016 17:44:09.208282   13712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:09.217455   13712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:09.226138   13712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:09.234625   13712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 17:44:09.242432   13712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:09.251207   13712 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:09.264728   13712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
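	The sed sequence above leaves the crio drop-in with the pause image, systemd cgroup settings, and the unprivileged-port sysctl; a minimal sketch for verifying the result on the node, assuming the profile from this run:
	
		# Inspect the crio drop-in produced by the sed edits above (verification sketch)
		out/minikube-linux-amd64 -p addons-431183 ssh -- \
		  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
		  /etc/crio/crio.conf.d/02-crio.conf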
	I1016 17:44:09.273857   13712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 17:44:09.281301   13712 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1016 17:44:09.281372   13712 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1016 17:44:09.293924   13712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 17:44:09.301896   13712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 17:44:09.379492   13712 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 17:44:09.482009   13712 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 17:44:09.482085   13712 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 17:44:09.486056   13712 start.go:563] Will wait 60s for crictl version
	I1016 17:44:09.486108   13712 ssh_runner.go:195] Run: which crictl
	I1016 17:44:09.490123   13712 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 17:44:09.513323   13712 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 17:44:09.513438   13712 ssh_runner.go:195] Run: crio --version
	I1016 17:44:09.540804   13712 ssh_runner.go:195] Run: crio --version
	I1016 17:44:09.570302   13712 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 17:44:09.571458   13712 cli_runner.go:164] Run: docker network inspect addons-431183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 17:44:09.588604   13712 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1016 17:44:09.592540   13712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 17:44:09.602431   13712 kubeadm.go:883] updating cluster {Name:addons-431183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-431183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 17:44:09.602533   13712 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 17:44:09.602571   13712 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 17:44:09.633269   13712 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 17:44:09.633289   13712 crio.go:433] Images already preloaded, skipping extraction
	I1016 17:44:09.633333   13712 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 17:44:09.659117   13712 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 17:44:09.659136   13712 cache_images.go:85] Images are preloaded, skipping loading
	I1016 17:44:09.659143   13712 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1016 17:44:09.659226   13712 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-431183 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-431183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 17:44:09.659297   13712 ssh_runner.go:195] Run: crio config
	I1016 17:44:09.702992   13712 cni.go:84] Creating CNI manager for ""
	I1016 17:44:09.703028   13712 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 17:44:09.703050   13712 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 17:44:09.703081   13712 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-431183 NodeName:addons-431183 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 17:44:09.703225   13712 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-431183"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
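	The generated kubeadm config above can be sanity-checked without touching the cluster via kubeadm's dry-run mode; a minimal sketch, assuming the binary path from this log and the staged config file the run writes out shortly below (kubeadm.yaml.new):
	
		# Dry-run the generated config to validate it before init (validation sketch)
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
		  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run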
	I1016 17:44:09.703296   13712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 17:44:09.711750   13712 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 17:44:09.711814   13712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 17:44:09.719629   13712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1016 17:44:09.733530   13712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 17:44:09.749527   13712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1016 17:44:09.762642   13712 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1016 17:44:09.766554   13712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 17:44:09.776751   13712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 17:44:09.858146   13712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 17:44:09.882405   13712 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183 for IP: 192.168.49.2
	I1016 17:44:09.882430   13712 certs.go:195] generating shared ca certs ...
	I1016 17:44:09.882483   13712 certs.go:227] acquiring lock for ca certs: {Name:mkebf15a3970a66f77cf66e14f6efeaafcab5e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:09.882606   13712 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key
	I1016 17:44:10.050177   13712 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt ...
	I1016 17:44:10.050205   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt: {Name:mk92ca197d451ca11c78b9aaeedc706e4d79a17e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:10.050374   13712 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key ...
	I1016 17:44:10.050387   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key: {Name:mkfcc2d9255fa5ee2fe177136fa6ab557b1c90ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:10.050459   13712 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key
	I1016 17:44:10.302775   13712 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt ...
	I1016 17:44:10.302800   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt: {Name:mk9315cbbc6404c054735a0ebde220e418cbb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:10.302949   13712 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key ...
	I1016 17:44:10.302959   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key: {Name:mk07d83703f861966f1139378a1238cb3c83e885 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:10.303038   13712 certs.go:257] generating profile certs ...
	I1016 17:44:10.303090   13712 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.key
	I1016 17:44:10.303103   13712 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt with IP's: []
	I1016 17:44:10.788204   13712 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt ...
	I1016 17:44:10.788230   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: {Name:mk77b1769a1b00a9f7b022011c484dd24ac8fc2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:10.788418   13712 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.key ...
	I1016 17:44:10.788434   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.key: {Name:mk86d2ac99c89452cff09866d24819974184a017 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:10.788547   13712 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.key.1bf9108a
	I1016 17:44:10.788567   13712 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.crt.1bf9108a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1016 17:44:11.379699   13712 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.crt.1bf9108a ...
	I1016 17:44:11.379732   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.crt.1bf9108a: {Name:mk83da1270ddee706b29dbd3e821b6dc7c5d1c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:11.379937   13712 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.key.1bf9108a ...
	I1016 17:44:11.379954   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.key.1bf9108a: {Name:mk01881853497ce21b9ef171c80bc0ef9a544baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:11.380054   13712 certs.go:382] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.crt.1bf9108a -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.crt
	I1016 17:44:11.380141   13712 certs.go:386] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.key.1bf9108a -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.key
	I1016 17:44:11.380189   13712 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/proxy-client.key
	I1016 17:44:11.380206   13712 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/proxy-client.crt with IP's: []
	I1016 17:44:11.541247   13712 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/proxy-client.crt ...
	I1016 17:44:11.541274   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/proxy-client.crt: {Name:mk82af69c2a54723d8ae2b40aeb6d923a717f681 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:11.541458   13712 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/proxy-client.key ...
	I1016 17:44:11.541472   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/proxy-client.key: {Name:mkbadf7274621992992f90af2262fab4e928caba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:11.541674   13712 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem (1679 bytes)
	I1016 17:44:11.541709   13712 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem (1078 bytes)
	I1016 17:44:11.541750   13712 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem (1123 bytes)
	I1016 17:44:11.541774   13712 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem (1679 bytes)
	I1016 17:44:11.542330   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 17:44:11.559982   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1016 17:44:11.577316   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 17:44:11.593993   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1016 17:44:11.611059   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1016 17:44:11.628095   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 17:44:11.645312   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 17:44:11.661881   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 17:44:11.678697   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 17:44:11.698627   13712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
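	
	At this point every certificate generated above has been copied to the node under /var/lib/minikube/certs. A quick sketch for confirming that the apiserver serving cert carries the SANs requested when it was generated (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2):
	
	#!/usr/bin/env bash
	# Sketch: print the SANs on the copied apiserver serving certificate
	# and compare them with the IP list used at generation time above.
	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	  | grep -A1 'Subject Alternative Name'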
	I1016 17:44:11.711515   13712 ssh_runner.go:195] Run: openssl version
	I1016 17:44:11.717707   13712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 17:44:11.729023   13712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 17:44:11.732964   13712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 17:44:11.733010   13712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 17:44:11.766911   13712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
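	
	The openssl steps above wire the minikube CA into the system trust store: OpenSSL resolves CAs in /etc/ssl/certs through symlinks named <subject-hash>.0, and "openssl x509 -hash" prints that hash (b5213941 in this run), which is why the runner links b5213941.0. Reproduced as a sketch:
	
	#!/usr/bin/env bash
	# Sketch: link the CA into the OpenSSL hash directory and verify it.
	PEM=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")   # b5213941 here
	sudo ln -fs "$PEM" "/etc/ssl/certs/$HASH.0"
	openssl verify -CApath /etc/ssl/certs "$PEM"   # self-signed root: OK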
	I1016 17:44:11.775857   13712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 17:44:11.779305   13712 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1016 17:44:11.779350   13712 kubeadm.go:400] StartCluster: {Name:addons-431183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-431183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 17:44:11.779430   13712 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 17:44:11.779469   13712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 17:44:11.807360   13712 cri.go:89] found id: ""
	I1016 17:44:11.807418   13712 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 17:44:11.815412   13712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 17:44:11.823439   13712 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1016 17:44:11.823489   13712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 17:44:11.831528   13712 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1016 17:44:11.831549   13712 kubeadm.go:157] found existing configuration files:
	
	I1016 17:44:11.831590   13712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1016 17:44:11.839040   13712 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1016 17:44:11.839095   13712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 17:44:11.846663   13712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1016 17:44:11.854013   13712 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1016 17:44:11.854057   13712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 17:44:11.861132   13712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1016 17:44:11.868271   13712 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1016 17:44:11.868325   13712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1016 17:44:11.875406   13712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1016 17:44:11.882610   13712 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1016 17:44:11.882658   13712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
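	
	The four grep-then-rm pairs above implement one rule: a kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; otherwise it is removed so kubeadm regenerates it. The same check as a loop:
	
	#!/usr/bin/env bash
	# Sketch: keep each kubeconfig only if it points at the expected
	# endpoint; delete the rest so kubeadm rewrites them on init.
	EP=https://control-plane.minikube.internal:8443
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$EP" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done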
	I1016 17:44:11.890747   13712 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1016 17:44:11.926102   13712 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1016 17:44:11.926219   13712 kubeadm.go:318] [preflight] Running pre-flight checks
	I1016 17:44:11.945708   13712 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1016 17:44:11.945815   13712 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1016 17:44:11.945869   13712 kubeadm.go:318] OS: Linux
	I1016 17:44:11.945945   13712 kubeadm.go:318] CGROUPS_CPU: enabled
	I1016 17:44:11.946004   13712 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1016 17:44:11.946069   13712 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1016 17:44:11.946124   13712 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1016 17:44:11.946163   13712 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1016 17:44:11.946207   13712 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1016 17:44:11.946246   13712 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1016 17:44:11.946284   13712 kubeadm.go:318] CGROUPS_IO: enabled
	I1016 17:44:12.001137   13712 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1016 17:44:12.001304   13712 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1016 17:44:12.001454   13712 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1016 17:44:12.008198   13712 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1016 17:44:12.010211   13712 out.go:252]   - Generating certificates and keys ...
	I1016 17:44:12.010316   13712 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 17:44:12.010429   13712 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 17:44:12.067438   13712 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 17:44:12.225103   13712 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 17:44:12.315893   13712 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 17:44:12.422635   13712 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 17:44:12.519153   13712 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 17:44:12.519319   13712 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-431183 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1016 17:44:12.688833   13712 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 17:44:12.689042   13712 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-431183 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1016 17:44:12.793210   13712 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 17:44:12.997659   13712 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 17:44:13.489931   13712 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 17:44:13.490015   13712 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 17:44:13.682059   13712 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 17:44:13.781974   13712 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 17:44:13.836506   13712 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 17:44:14.030302   13712 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 17:44:14.269068   13712 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 17:44:14.269513   13712 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 17:44:14.274381   13712 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 17:44:14.275878   13712 out.go:252]   - Booting up control plane ...
	I1016 17:44:14.276012   13712 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 17:44:14.276116   13712 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 17:44:14.276794   13712 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 17:44:14.290258   13712 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 17:44:14.290348   13712 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 17:44:14.297285   13712 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 17:44:14.297424   13712 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 17:44:14.297483   13712 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 17:44:14.398077   13712 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 17:44:14.398224   13712 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 17:44:14.899648   13712 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.723117ms
	I1016 17:44:14.903557   13712 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 17:44:14.903683   13712 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1016 17:44:14.903843   13712 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 17:44:14.903971   13712 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 17:44:16.276508   13712 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.372870402s
	I1016 17:44:17.239387   13712 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.335846107s
	I1016 17:44:18.905124   13712 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001486271s
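	
	The kubelet-check and control-plane-check phases above poll ordinary HTTP(S) health endpoints, so the same probes can be run by hand. A sketch using the addresses from the log; -k skips TLS verification, and these paths are readable anonymously on a default kubeadm cluster:
	
	#!/usr/bin/env bash
	# Sketch: hit the same health endpoints kubeadm polls above; each
	# prints "ok" once the component is healthy.
	curl -s  http://127.0.0.1:10248/healthz;  echo   # kubelet
	curl -sk https://192.168.49.2:8443/livez; echo   # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz; echo   # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez;   echo   # kube-scheduler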
	I1016 17:44:18.915901   13712 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 17:44:18.926842   13712 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 17:44:18.938158   13712 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 17:44:18.938422   13712 kubeadm.go:318] [mark-control-plane] Marking the node addons-431183 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 17:44:18.945911   13712 kubeadm.go:318] [bootstrap-token] Using token: s8h074.a5lym059it9fzll8
	I1016 17:44:18.947599   13712 out.go:252]   - Configuring RBAC rules ...
	I1016 17:44:18.947781   13712 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 17:44:18.950603   13712 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 17:44:18.955766   13712 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 17:44:18.958328   13712 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 17:44:18.961586   13712 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 17:44:18.964156   13712 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 17:44:19.310856   13712 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 17:44:19.726616   13712 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 17:44:20.311237   13712 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 17:44:20.312037   13712 kubeadm.go:318] 
	I1016 17:44:20.312132   13712 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 17:44:20.312147   13712 kubeadm.go:318] 
	I1016 17:44:20.312211   13712 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 17:44:20.312242   13712 kubeadm.go:318] 
	I1016 17:44:20.312290   13712 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 17:44:20.312376   13712 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 17:44:20.312480   13712 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 17:44:20.312497   13712 kubeadm.go:318] 
	I1016 17:44:20.312580   13712 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 17:44:20.312589   13712 kubeadm.go:318] 
	I1016 17:44:20.312657   13712 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 17:44:20.312666   13712 kubeadm.go:318] 
	I1016 17:44:20.312757   13712 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 17:44:20.312859   13712 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 17:44:20.312952   13712 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 17:44:20.312961   13712 kubeadm.go:318] 
	I1016 17:44:20.313057   13712 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 17:44:20.313163   13712 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 17:44:20.313178   13712 kubeadm.go:318] 
	I1016 17:44:20.313289   13712 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token s8h074.a5lym059it9fzll8 \
	I1016 17:44:20.313415   13712 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c \
	I1016 17:44:20.313449   13712 kubeadm.go:318] 	--control-plane 
	I1016 17:44:20.313457   13712 kubeadm.go:318] 
	I1016 17:44:20.313562   13712 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 17:44:20.313570   13712 kubeadm.go:318] 
	I1016 17:44:20.313680   13712 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token s8h074.a5lym059it9fzll8 \
	I1016 17:44:20.313834   13712 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c 
	I1016 17:44:20.315959   13712 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1016 17:44:20.316124   13712 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
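	
	The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 over the cluster CA's public key. It can be recomputed from the CA certificate with the standard recipe from the Kubernetes docs; this cluster keeps its CA under /var/lib/minikube/certs, and the recipe assumes an RSA CA key, as used here:
	
	#!/usr/bin/env bash
	# Sketch: recompute the discovery-token-ca-cert-hash from ca.crt; the
	# output should match the sha256:40455f77... value printed above.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'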
	I1016 17:44:20.316162   13712 cni.go:84] Creating CNI manager for ""
	I1016 17:44:20.316177   13712 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 17:44:20.317927   13712 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 17:44:20.319604   13712 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 17:44:20.323893   13712 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 17:44:20.323912   13712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 17:44:20.337400   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 17:44:20.545748   13712 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 17:44:20.545820   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:20.545832   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-431183 minikube.k8s.io/updated_at=2025_10_16T17_44_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=addons-431183 minikube.k8s.io/primary=true
	I1016 17:44:20.558072   13712 ops.go:34] apiserver oom_adj: -16
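	
	The ops.go line above records the kube-apiserver's OOM adjustment: -16 on the legacy oom_adj scale (-17 to +15, where -17 exempts a process entirely) tells the kernel's OOM killer to strongly prefer other victims. A sketch for reading both the legacy and modern knobs:
	
	#!/usr/bin/env bash
	# Sketch: read the apiserver's OOM-killer settings, as the runner does.
	PID=$(pgrep kube-apiserver)
	cat "/proc/$PID/oom_adj"        # legacy scale, -17..+15 (-16 here)
	cat "/proc/$PID/oom_score_adj"  # modern scale, -1000..+1000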
	I1016 17:44:20.634912   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:21.135411   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:21.635999   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:22.135021   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:22.635829   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:23.135264   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:23.635279   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:24.135278   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:24.635299   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:25.135855   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:25.202737   13712 kubeadm.go:1113] duration metric: took 4.656956624s to wait for elevateKubeSystemPrivileges
	I1016 17:44:25.202775   13712 kubeadm.go:402] duration metric: took 13.423428318s to StartCluster
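	
	The run of "kubectl get sa default" calls above is a poll loop: the "default" ServiceAccount is created asynchronously by the controller-manager, so the runner retries roughly every 500ms until the account exists before treating the RBAC setup as done; here that wait accounts for most of the 4.7s duration metric. The loop as a sketch:
	
	#!/usr/bin/env bash
	# Sketch of the poll above: wait until the "default" ServiceAccount
	# has been created by the controller-manager.
	K=/var/lib/minikube/binaries/v1.34.1/kubectl
	until sudo "$K" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done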
	I1016 17:44:25.202793   13712 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:25.202893   13712 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 17:44:25.203356   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:25.203565   13712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 17:44:25.203581   13712 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 17:44:25.203667   13712 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1016 17:44:25.203796   13712 addons.go:69] Setting yakd=true in profile "addons-431183"
	I1016 17:44:25.203819   13712 addons.go:238] Setting addon yakd=true in "addons-431183"
	I1016 17:44:25.203819   13712 addons.go:69] Setting default-storageclass=true in profile "addons-431183"
	I1016 17:44:25.203845   13712 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:44:25.203862   13712 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-431183"
	I1016 17:44:25.203878   13712 addons.go:69] Setting ingress=true in profile "addons-431183"
	I1016 17:44:25.203883   13712 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-431183"
	I1016 17:44:25.203850   13712 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-431183"
	I1016 17:44:25.203899   13712 addons.go:69] Setting ingress-dns=true in profile "addons-431183"
	I1016 17:44:25.203909   13712 addons.go:238] Setting addon ingress-dns=true in "addons-431183"
	I1016 17:44:25.203918   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.203931   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.203931   13712 addons.go:69] Setting registry-creds=true in profile "addons-431183"
	I1016 17:44:25.203954   13712 addons.go:238] Setting addon registry-creds=true in "addons-431183"
	I1016 17:44:25.203990   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.204073   13712 addons.go:69] Setting metrics-server=true in profile "addons-431183"
	I1016 17:44:25.204080   13712 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-431183"
	I1016 17:44:25.204098   13712 addons.go:238] Setting addon metrics-server=true in "addons-431183"
	I1016 17:44:25.204105   13712 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-431183"
	I1016 17:44:25.204127   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.204247   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.204398   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.204403   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.204424   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.204541   13712 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-431183"
	I1016 17:44:25.204559   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.204656   13712 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-431183"
	I1016 17:44:25.204682   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.205132   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.205192   13712 addons.go:69] Setting registry=true in profile "addons-431183"
	I1016 17:44:25.205460   13712 addons.go:238] Setting addon registry=true in "addons-431183"
	I1016 17:44:25.205489   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.205962   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.206245   13712 addons.go:69] Setting inspektor-gadget=true in profile "addons-431183"
	I1016 17:44:25.206279   13712 addons.go:238] Setting addon inspektor-gadget=true in "addons-431183"
	I1016 17:44:25.206314   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.206667   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.203853   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.209761   13712 addons.go:69] Setting volcano=true in profile "addons-431183"
	I1016 17:44:25.209819   13712 addons.go:238] Setting addon volcano=true in "addons-431183"
	I1016 17:44:25.209889   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.210044   13712 addons.go:69] Setting volumesnapshots=true in profile "addons-431183"
	I1016 17:44:25.210074   13712 addons.go:238] Setting addon volumesnapshots=true in "addons-431183"
	I1016 17:44:25.210109   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.210391   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.210551   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.210587   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.210666   13712 out.go:179] * Verifying Kubernetes components...
	I1016 17:44:25.203869   13712 addons.go:69] Setting gcp-auth=true in profile "addons-431183"
	I1016 17:44:25.211151   13712 mustload.go:65] Loading cluster: addons-431183
	I1016 17:44:25.211330   13712 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:44:25.211560   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.213786   13712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 17:44:25.213952   13712 addons.go:69] Setting cloud-spanner=true in profile "addons-431183"
	I1016 17:44:25.213970   13712 addons.go:238] Setting addon cloud-spanner=true in "addons-431183"
	I1016 17:44:25.214001   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.214117   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.214176   13712 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-431183"
	I1016 17:44:25.214266   13712 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-431183"
	I1016 17:44:25.205408   13712 addons.go:69] Setting storage-provisioner=true in profile "addons-431183"
	I1016 17:44:25.214303   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.214320   13712 addons.go:238] Setting addon storage-provisioner=true in "addons-431183"
	I1016 17:44:25.214358   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.203891   13712 addons.go:238] Setting addon ingress=true in "addons-431183"
	I1016 17:44:25.216917   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.217425   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.219417   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.219789   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.219938   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.256764   13712 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1016 17:44:25.258073   13712 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1016 17:44:25.258094   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1016 17:44:25.258167   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
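	
	The repeated cli_runner calls of the form "docker container inspect -f ..." resolve the host port Docker published for the container's 22/tcp, which is how every SSH client opened below ends up at 127.0.0.1:32768. The Go template on its own:
	
	#!/usr/bin/env bash
	# Sketch: ask Docker which host port is mapped to the node's SSH port.
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  addons-431183    # prints 32768 in this run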
	I1016 17:44:25.267563   13712 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1016 17:44:25.273781   13712 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1016 17:44:25.274239   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1016 17:44:25.275103   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.275355   13712 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1016 17:44:25.280901   13712 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1016 17:44:25.280927   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1016 17:44:25.280987   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.284099   13712 out.go:179]   - Using image docker.io/registry:3.0.0
	I1016 17:44:25.285533   13712 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1016 17:44:25.287000   13712 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1016 17:44:25.287027   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1016 17:44:25.287084   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.289888   13712 addons.go:238] Setting addon default-storageclass=true in "addons-431183"
	I1016 17:44:25.290627   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.292851   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.294732   13712 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1016 17:44:25.296068   13712 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1016 17:44:25.296087   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1016 17:44:25.296140   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.301185   13712 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-431183"
	I1016 17:44:25.301244   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.301726   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	W1016 17:44:25.312040   13712 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1016 17:44:25.317397   13712 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1016 17:44:25.319607   13712 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1016 17:44:25.319661   13712 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1016 17:44:25.319737   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.330100   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.333361   13712 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1016 17:44:25.334796   13712 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1016 17:44:25.338227   13712 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1016 17:44:25.339887   13712 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1016 17:44:25.339904   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1016 17:44:25.339962   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.346060   13712 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1016 17:44:25.346426   13712 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1016 17:44:25.347532   13712 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1016 17:44:25.349959   13712 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1016 17:44:25.350033   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.352513   13712 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1016 17:44:25.352517   13712 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 17:44:25.357686   13712 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 17:44:25.357726   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 17:44:25.357790   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.360254   13712 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1016 17:44:25.361773   13712 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1016 17:44:25.363748   13712 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1016 17:44:25.364410   13712 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1016 17:44:25.366819   13712 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1016 17:44:25.368023   13712 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1016 17:44:25.369583   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1016 17:44:25.369672   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.370929   13712 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1016 17:44:25.372617   13712 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1016 17:44:25.374318   13712 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1016 17:44:25.374341   13712 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1016 17:44:25.374401   13712 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1016 17:44:25.374420   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.378465   13712 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1016 17:44:25.378572   13712 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1016 17:44:25.378655   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.385225   13712 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 17:44:25.385248   13712 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 17:44:25.385302   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.385438   13712 out.go:179]   - Using image docker.io/busybox:stable
	I1016 17:44:25.389383   13712 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1016 17:44:25.389991   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.391367   13712 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1016 17:44:25.391391   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1016 17:44:25.391455   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.392182   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.394667   13712 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1016 17:44:25.397118   13712 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1016 17:44:25.397146   13712 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1016 17:44:25.397210   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.399846   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.406025   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.408196   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.409502   13712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
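	
	The one-liner above edits CoreDNS in place: it reads the coredns ConfigMap, uses sed to splice a hosts{} block mapping host.minikube.internal to the gateway IP 192.168.49.1 ahead of the "forward . /etc/resolv.conf" stanza and to insert "log" above the "errors" directive, then feeds the result back through kubectl replace. Broken out for readability:
	
	#!/usr/bin/env bash
	# Sketch of the CoreDNS Corefile edit above, split into steps.
	K="sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
	$K -n kube-system get configmap coredns -o yaml \
	  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
	        -e '/^        errors *$/i \        log' \
	  | $K replace -f -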
	I1016 17:44:25.410007   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.437617   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.439834   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.439937   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.441076   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.452923   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.455587   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.456553   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.457333   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.473832   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.509280   13712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 17:44:25.597402   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1016 17:44:25.608167   13712 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1016 17:44:25.608193   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1016 17:44:25.614207   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1016 17:44:25.614507   13712 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1016 17:44:25.614526   13712 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1016 17:44:25.633492   13712 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1016 17:44:25.633521   13712 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1016 17:44:25.643482   13712 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1016 17:44:25.643512   13712 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1016 17:44:25.653213   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1016 17:44:25.658151   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1016 17:44:25.661783   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 17:44:25.672536   13712 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1016 17:44:25.672564   13712 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1016 17:44:25.674981   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1016 17:44:25.675005   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 17:44:25.674986   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1016 17:44:25.676493   13712 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:25.676512   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1016 17:44:25.678587   13712 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1016 17:44:25.678605   13712 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1016 17:44:25.679729   13712 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1016 17:44:25.679756   13712 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1016 17:44:25.680641   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1016 17:44:25.681663   13712 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1016 17:44:25.681680   13712 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1016 17:44:25.683573   13712 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1016 17:44:25.683594   13712 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1016 17:44:25.717266   13712 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1016 17:44:25.717301   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1016 17:44:25.724422   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1016 17:44:25.728882   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:25.733209   13712 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1016 17:44:25.733234   13712 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1016 17:44:25.735015   13712 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1016 17:44:25.735043   13712 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1016 17:44:25.735456   13712 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1016 17:44:25.735474   13712 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1016 17:44:25.763247   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1016 17:44:25.784981   13712 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1016 17:44:25.785008   13712 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1016 17:44:25.786123   13712 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1016 17:44:25.786175   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1016 17:44:25.809104   13712 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1016 17:44:25.809206   13712 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1016 17:44:25.849160   13712 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1016 17:44:25.849219   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1016 17:44:25.860886   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1016 17:44:25.884159   13712 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1016 17:44:25.884252   13712 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1016 17:44:25.888351   13712 node_ready.go:35] waiting up to 6m0s for node "addons-431183" to be "Ready" ...
	I1016 17:44:25.888617   13712 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
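	The "host record injected" line above rewrites the coredns ConfigMap so in-cluster workloads can resolve host.minikube.internal to the host gateway (192.168.49.1 here). A hedged client-go sketch of that kind of ConfigMap edit follows; the "NodeHosts" key is an assumption for illustration, not necessarily the field minikube patches:

	    package addons

	    import (
	    	"context"

	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    )

	    // injectHostRecord appends "ip host" to the coredns ConfigMap so cluster
	    // DNS can resolve the host alias. "NodeHosts" is a hypothetical key.
	    func injectHostRecord(cs *kubernetes.Clientset, ip, host string) error {
	    	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	    	if err != nil {
	    		return err
	    	}
	    	if cm.Data == nil {
	    		cm.Data = map[string]string{}
	    	}
	    	cm.Data["NodeHosts"] += ip + " " + host + "\n"
	    	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{})
	    	return err
	    }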
	I1016 17:44:25.908248   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1016 17:44:25.926193   13712 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1016 17:44:25.926306   13712 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1016 17:44:26.018493   13712 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1016 17:44:26.018580   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1016 17:44:26.058191   13712 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1016 17:44:26.058220   13712 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1016 17:44:26.114364   13712 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1016 17:44:26.114387   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1016 17:44:26.152515   13712 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1016 17:44:26.152537   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1016 17:44:26.192501   13712 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1016 17:44:26.192526   13712 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1016 17:44:26.230106   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1016 17:44:26.394118   13712 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-431183" context rescaled to 1 replicas
	I1016 17:44:26.849228   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.174172857s)
	I1016 17:44:26.849266   13712 addons.go:479] Verifying addon ingress=true in "addons-431183"
	I1016 17:44:26.849293   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.1742841s)
	I1016 17:44:26.849646   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.174620266s)
	I1016 17:44:26.849703   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.169006801s)
	I1016 17:44:26.849849   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.125397728s)
	I1016 17:44:26.849872   13712 addons.go:479] Verifying addon metrics-server=true in "addons-431183"
	I1016 17:44:26.849964   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.121060729s)
	W1016 17:44:26.849992   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:26.850009   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.086732275s)
	I1016 17:44:26.850020   13712 retry.go:31] will retry after 304.232314ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
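	The paired "apply failed, will retry" and "will retry after …" lines above come from minikube's backoff helper: the same kubectl apply is re-run with a growing delay between attempts. A minimal Go sketch of that retry pattern, with a hypothetical helper rather than minikube's actual retry API:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"time"
	    )

	    // applyManifests shells out to kubectl, mirroring the logged command shape.
	    // Assumes kubectl is on PATH and KUBECONFIG points at the cluster.
	    func applyManifests(files ...string) error {
	    	args := []string{"apply", "--force"}
	    	for _, f := range files {
	    		args = append(args, "-f", f)
	    	}
	    	return exec.Command("kubectl", args...).Run()
	    }

	    func main() {
	    	// grow the delay between attempts, as the retry.go log lines suggest
	    	delay := 300 * time.Millisecond
	    	for attempt := 1; attempt <= 5; attempt++ {
	    		err := applyManifests("/etc/kubernetes/addons/ig-crd.yaml",
	    			"/etc/kubernetes/addons/ig-deployment.yaml")
	    		if err == nil {
	    			return
	    		}
	    		fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
	    		time.Sleep(delay)
	    		delay *= 2
	    	}
	    }

	Note that retrying cannot fix this particular failure: the validation error says the rendered ig-crd.yaml is missing its apiVersion and kind headers, so every attempt fails the same way.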
	I1016 17:44:26.850023   13712 addons.go:479] Verifying addon registry=true in "addons-431183"
	I1016 17:44:26.850857   13712 out.go:179] * Verifying ingress addon...
	I1016 17:44:26.851811   13712 out.go:179] * Verifying registry addon...
	I1016 17:44:26.854587   13712 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1016 17:44:26.855384   13712 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1016 17:44:26.859755   13712 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1016 17:44:26.859849   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:26.860174   13712 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1016 17:44:26.860379   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
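	The kapi.go "waiting for pod … current state: Pending" lines that follow are a poll loop over pods matched by a label selector. A hedged client-go sketch of that loop; the function name and timeout handling are illustrative, not minikube's actual kapi code:

	    package kapi

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    )

	    // waitForPodRunning polls pods matching selector in ns until one reports
	    // phase Running, or the timeout elapses.
	    func waitForPodRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
	    			metav1.ListOptions{LabelSelector: selector})
	    		if err != nil {
	    			return err
	    		}
	    		for _, p := range pods.Items {
	    			if p.Status.Phase == corev1.PodRunning {
	    				return nil
	    			}
	    		}
	    		fmt.Printf("waiting for pod %q, still pending\n", selector)
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	return fmt.Errorf("timed out waiting for %q", selector)
	    }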
	I1016 17:44:27.154590   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:27.290879   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.382591673s)
	I1016 17:44:27.291039   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.430106882s)
	W1016 17:44:27.291086   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1016 17:44:27.291110   13712 retry.go:31] will retry after 241.886996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
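	This failure is an ordering problem rather than a bad manifest: kubectl created the VolumeSnapshot CRDs in the same batch (the stdout above shows them as "created"), but they were not yet established in the API server when the csi-hostpath-snapclass object was validated, so the whole batch is retried. A hedged sketch of how one could instead gate on the CRD's Established condition using the apiextensions client (illustrative; minikube simply retries the batch):

	    package addons

	    import (
	    	"context"

	    	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	    	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    )

	    // crdEstablished reports whether the named CRD has its Established
	    // condition set to True, i.e. its kinds can be served.
	    func crdEstablished(cs *apiextclient.Clientset, name string) (bool, error) {
	    	crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(
	    		context.TODO(), name, metav1.GetOptions{})
	    	if err != nil {
	    		return false, err
	    	}
	    	for _, c := range crd.Status.Conditions {
	    		if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
	    			return true, nil
	    		}
	    	}
	    	return false, nil
	    }

	Polling this for volumesnapshotclasses.snapshot.storage.k8s.io before applying csi-hostpath-snapshotclass.yaml would remove the race.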
	I1016 17:44:27.291141   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.060977615s)
	I1016 17:44:27.291188   13712 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-431183"
	I1016 17:44:27.292464   13712 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-431183 service yakd-dashboard -n yakd-dashboard
	
	I1016 17:44:27.293448   13712 out.go:179] * Verifying csi-hostpath-driver addon...
	I1016 17:44:27.296094   13712 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1016 17:44:27.303793   13712 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1016 17:44:27.303819   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:27.406789   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:27.406919   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:27.533214   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1016 17:44:27.769568   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:27.769602   13712 retry.go:31] will retry after 340.975823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:27.799003   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:27.857654   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:27.857952   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:27.891355   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
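	The node_ready.go warnings repeated through the rest of this log poll the node object until its Ready condition turns True; the node reports NotReady until the kubelet's runtime and network checks pass. The check itself reduces to a scan of node status conditions, as in this minimal client-go-typed sketch:

	    package kverify

	    import (
	    	corev1 "k8s.io/api/core/v1"
	    )

	    // nodeReady reports whether the node's Ready condition is True.
	    func nodeReady(node *corev1.Node) bool {
	    	for _, c := range node.Status.Conditions {
	    		if c.Type == corev1.NodeReady {
	    			return c.Status == corev1.ConditionTrue
	    		}
	    	}
	    	return false
	    }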
	I1016 17:44:28.111295   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:28.299454   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:28.399843   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:28.399985   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:28.799683   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:28.858153   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:28.858370   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:29.299604   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:29.357896   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:29.358037   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:29.799369   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:29.900557   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:29.900816   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:30.029628   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.496362298s)
	I1016 17:44:30.029696   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.918349698s)
	W1016 17:44:30.029747   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:30.029768   13712 retry.go:31] will retry after 642.750032ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:30.299581   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 17:44:30.391201   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:30.399745   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:30.399837   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:30.672862   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:30.799815   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:30.857867   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:30.858170   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:31.199645   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:31.199676   13712 retry.go:31] will retry after 672.617502ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:31.299871   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:31.400933   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:31.401038   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:31.799367   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:31.857860   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:31.857980   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:31.872993   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:32.300104   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 17:44:32.391608   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:32.400945   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:32.401162   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1016 17:44:32.417530   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:32.417561   13712 retry.go:31] will retry after 1.622996807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:32.799595   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:32.857504   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:32.857901   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:32.938611   13712 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1016 17:44:32.938671   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:32.957192   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
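	The docker container inspect template above extracts the host port that Docker published for the container's 22/tcp endpoint; that port (32768 here) is what the SSH client is built against. The same lookup as a small Go helper around the Docker CLI, a hypothetical wrapper that mirrors the logged command:

	    package sshutil

	    import (
	    	"os/exec"
	    	"strings"
	    )

	    // sshHostPort asks Docker for the host port mapped to the container's
	    // 22/tcp. The container name comes from the log above.
	    func sshHostPort(container string) (string, error) {
	    	out, err := exec.Command("docker", "container", "inspect", "-f",
	    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
	    		container).Output()
	    	return strings.TrimSpace(string(out)), err
	    }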
	I1016 17:44:33.061768   13712 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1016 17:44:33.074835   13712 addons.go:238] Setting addon gcp-auth=true in "addons-431183"
	I1016 17:44:33.074891   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:33.075283   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:33.092937   13712 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1016 17:44:33.092979   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:33.111570   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:33.208878   13712 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1016 17:44:33.210665   13712 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1016 17:44:33.212357   13712 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1016 17:44:33.212379   13712 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1016 17:44:33.225763   13712 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1016 17:44:33.225793   13712 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1016 17:44:33.238601   13712 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1016 17:44:33.238620   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1016 17:44:33.251598   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1016 17:44:33.299793   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:33.357896   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:33.358516   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:33.568984   13712 addons.go:479] Verifying addon gcp-auth=true in "addons-431183"
	I1016 17:44:33.571329   13712 out.go:179] * Verifying gcp-auth addon...
	I1016 17:44:33.573693   13712 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1016 17:44:33.576324   13712 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1016 17:44:33.576339   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:33.799034   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:33.857542   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:33.857745   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:34.041326   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:34.076266   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:34.299452   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:34.357506   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:34.358299   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:34.391838   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	W1016 17:44:34.570099   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:34.570137   13712 retry.go:31] will retry after 2.042622617s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:34.577131   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:34.798686   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:34.857346   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:34.857907   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:35.076931   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:35.298681   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:35.358036   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:35.358271   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:35.576424   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:35.799076   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:35.857592   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:35.858381   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:36.076664   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:36.299307   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:36.357779   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:36.357975   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:36.577585   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:36.613791   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:36.799325   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:36.857777   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:36.857844   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1016 17:44:36.891648   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:37.076605   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1016 17:44:37.147533   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:37.147568   13712 retry.go:31] will retry after 3.288066533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:37.299214   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:37.358025   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:37.358190   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:37.576913   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:37.799411   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:37.858024   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:37.858286   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:38.076522   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:38.299032   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:38.357932   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:38.358456   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:38.576609   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:38.799474   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:38.858031   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:38.858171   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:39.077328   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:39.298983   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:39.357425   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:39.358404   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:39.391026   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:39.576626   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:39.799205   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:39.857968   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:39.858146   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:40.077057   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:40.299686   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:40.357308   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:40.358133   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:40.436741   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:40.577897   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:40.798895   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:40.857563   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:40.858205   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:40.967760   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:40.967789   13712 retry.go:31] will retry after 5.688643093s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:41.076216   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:41.298678   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:41.357341   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:41.357930   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:41.391446   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:41.577307   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:41.798930   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:41.857876   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:41.858255   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:42.076367   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:42.299068   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:42.357745   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:42.357844   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:42.577398   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:42.799786   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:42.857657   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:42.857933   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:43.076982   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:43.299842   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:43.357628   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:43.358381   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:43.391798   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:43.576657   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:43.799228   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:43.857762   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:43.857991   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:44.076786   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:44.299404   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:44.357922   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:44.357977   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:44.577554   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:44.799151   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:44.857637   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:44.858224   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:45.077362   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:45.299134   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:45.357829   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:45.358028   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:45.577100   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:45.799816   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:45.857901   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:45.858510   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:45.890877   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:46.076476   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:46.299155   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:46.357693   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:46.357817   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:46.576482   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:46.656552   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:46.799313   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:46.857756   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:46.858010   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:47.076839   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1016 17:44:47.183257   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:47.183282   13712 retry.go:31] will retry after 4.644458726s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
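
The failure repeated above is a client-side validation error: kubectl refuses /etc/kubernetes/addons/ig-crd.yaml because the manifest is missing the mandatory apiVersion and kind fields (for a CRD these are normally apiVersion: apiextensions.k8s.io/v1 and kind: CustomResourceDefinition). Every resource coming from ig-deployment.yaml applies cleanly ("unchanged"/"configured"), so only the CRD file is affected. A hypothetical way to confirm this from the host — the file path and profile name come from the log; the commands themselves are illustrative and not part of the test:

	# Inspect the top of the manifest inside the node; a well-formed CRD begins
	# with its apiVersion and kind fields.
	minikube -p addons-431183 ssh "sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml"

	# kubectl's suggested escape hatch only disables client-side schema checks;
	# a manifest that truly lacks apiVersion/kind still cannot be decoded into
	# an object, so this apply would fail as well. The real fix is restoring
	# the two missing fields in the file.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --validate=false -f /etc/kubernetes/addons/ig-crd.yaml
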
	I1016 17:44:47.298815   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:47.357656   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:47.357862   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:47.576876   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:47.799742   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:47.857524   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:47.858011   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:47.891570   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:48.077089   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:48.298669   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:48.358183   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:48.358443   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:48.577285   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:48.799448   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:48.857899   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:48.858030   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:49.077745   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:49.299641   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:49.357178   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:49.357645   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:49.577479   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:49.799179   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:49.857918   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:49.858143   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:50.077140   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:50.299196   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:50.357899   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:50.357911   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1016 17:44:50.391414   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:50.577316   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:50.798849   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:50.857515   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:50.857752   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:51.076745   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:51.299571   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:51.358122   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:51.358227   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:51.577314   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:51.799142   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:51.828341   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:51.857320   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:51.858023   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:52.077063   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:52.298303   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:52.359096   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:52.359327   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:52.363382   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:52.363461   13712 retry.go:31] will retry after 13.305923226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1016 17:44:52.392011   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:52.576679   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:52.799555   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:52.858158   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:52.858319   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:53.077119   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:53.298750   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:53.357011   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:53.357764   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:53.577188   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:53.798795   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:53.857402   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:53.857750   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:54.077213   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:54.299084   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:54.357888   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:54.358487   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:54.576628   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:54.799326   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:54.857772   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:54.857864   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1016 17:44:54.891258   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:55.076878   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:55.298748   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:55.357252   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:55.357698   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:55.576926   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:55.799334   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:55.857768   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:55.857840   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:56.077031   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:56.299848   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:56.357449   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:56.357913   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:56.577150   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:56.799127   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:56.857656   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:56.858133   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:56.891440   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:57.077171   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:57.298669   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:57.358145   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:57.358345   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:57.577219   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:57.798744   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:57.857186   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:57.857932   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:58.078134   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:58.298706   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:58.357392   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:58.357971   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:58.577244   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:58.798742   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:58.857138   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:58.857843   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:59.076847   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:59.299372   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:59.357836   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:59.358039   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1016 17:44:59.391312   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:59.576782   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:59.799383   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:59.858017   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:59.858079   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:00.077055   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:00.299167   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:00.357616   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:00.357842   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:00.576839   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:00.799632   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:00.857994   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:00.858040   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:01.076943   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:01.299594   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:01.358038   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:01.358103   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1016 17:45:01.391513   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:45:01.577126   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:01.799265   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:01.857656   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:01.857842   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:02.076940   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:02.299771   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:02.357383   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:02.357972   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:02.577328   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:02.799270   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:02.857865   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:02.858009   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:03.077360   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:03.299129   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:03.357925   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:03.358243   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:45:03.392063   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:45:03.576638   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:03.799112   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:03.857639   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:03.857768   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:04.077237   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:04.298788   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:04.357327   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:04.357931   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:04.576949   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:04.799548   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:04.858006   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:04.858133   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:05.077311   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:05.299172   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:05.357687   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:05.357917   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:05.577018   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:05.670252   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:45:05.799425   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:05.858234   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:05.858476   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1016 17:45:05.891917   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:45:06.076988   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1016 17:45:06.209029   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:06.209061   13712 retry.go:31] will retry after 13.152751955s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:06.299590   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:06.358201   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:06.358244   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:06.578759   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:06.799289   13712 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1016 17:45:06.799309   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:06.858099   13712 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1016 17:45:06.858124   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:06.858289   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:06.892382   13712 node_ready.go:49] node "addons-431183" is "Ready"
	I1016 17:45:06.892414   13712 node_ready.go:38] duration metric: took 41.004036419s for node "addons-431183" to be "Ready" ...
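
Node readiness was the long pole of this startup: the node_ready poller above retried every 2-2.5 seconds for 41 seconds until the node's Ready condition flipped to True. A hypothetical equivalent wait with plain kubectl (node name taken from the log):

	kubectl wait --for=condition=Ready node/addons-431183 --timeout=120s
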
	I1016 17:45:06.892429   13712 api_server.go:52] waiting for apiserver process to appear ...
	I1016 17:45:06.892485   13712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 17:45:06.913453   13712 api_server.go:72] duration metric: took 41.709838831s to wait for apiserver process to appear ...
	I1016 17:45:06.913483   13712 api_server.go:88] waiting for apiserver healthz status ...
	I1016 17:45:06.913504   13712 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 17:45:06.918675   13712 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1016 17:45:06.920200   13712 api_server.go:141] control plane version: v1.34.1
	I1016 17:45:06.920316   13712 api_server.go:131] duration metric: took 6.824153ms to wait for apiserver health ...
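
Once the node is Ready, the control plane is verified in the two steps visible above: pgrep confirms a kube-apiserver process exists on the node, then an HTTPS GET of /healthz must return 200 with body "ok". A hypothetical manual probe of the same endpoint, assuming the default RBAC that exposes /healthz to unauthenticated callers (-k skips certificate verification against minikube's self-signed CA):

	curl -sk https://192.168.49.2:8443/healthz
	# expected output: ok
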
	I1016 17:45:06.920339   13712 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 17:45:06.963739   13712 system_pods.go:59] 20 kube-system pods found
	I1016 17:45:06.963784   13712 system_pods.go:61] "amd-gpu-device-plugin-6bmbl" [92edcbbf-d797-4999-8ce6-d9bd732cc23e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1016 17:45:06.963796   13712 system_pods.go:61] "coredns-66bc5c9577-75dtc" [78c8df84-91a0-4258-99dc-3cb63420358f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 17:45:06.963806   13712 system_pods.go:61] "csi-hostpath-attacher-0" [1cd92c52-4deb-4b96-8e95-d000dd51d895] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 17:45:06.963814   13712 system_pods.go:61] "csi-hostpath-resizer-0" [5a7f2e9a-0e16-4f9a-89da-404ff25e4115] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 17:45:06.963822   13712 system_pods.go:61] "csi-hostpathplugin-lwfnt" [d0e19e01-0ca5-4a49-9f8e-3cd3438fed4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 17:45:06.963828   13712 system_pods.go:61] "etcd-addons-431183" [dacbf6c0-3773-4f4e-a814-ed8813ec5a42] Running
	I1016 17:45:06.963835   13712 system_pods.go:61] "kindnet-xm247" [3a190cf7-af44-4a35-8cea-1a4e799fab68] Running
	I1016 17:45:06.963841   13712 system_pods.go:61] "kube-apiserver-addons-431183" [e968414a-90f6-452b-bc3f-2e8e1999b8e4] Running
	I1016 17:45:06.963846   13712 system_pods.go:61] "kube-controller-manager-addons-431183" [ec5d667f-8b35-4c84-a475-78cf546a78a0] Running
	I1016 17:45:06.963854   13712 system_pods.go:61] "kube-ingress-dns-minikube" [b40908b0-a37c-4873-b577-02403cfebda1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1016 17:45:06.963860   13712 system_pods.go:61] "kube-proxy-kxgwk" [1757da5d-0d02-4508-847f-d04b458e7497] Running
	I1016 17:45:06.963865   13712 system_pods.go:61] "kube-scheduler-addons-431183" [67d05e32-dc46-40a7-8aeb-1a581cfc7dfd] Running
	I1016 17:45:06.963872   13712 system_pods.go:61] "metrics-server-85b7d694d7-m2l65" [37717fb0-1759-4af3-aa42-feadddd69063] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 17:45:06.963895   13712 system_pods.go:61] "nvidia-device-plugin-daemonset-kcsqr" [895271a9-cb66-441d-924c-5aab58267f88] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1016 17:45:06.963908   13712 system_pods.go:61] "registry-6b586f9694-4gxbm" [760d1bfa-750e-4a66-92c9-6f7903ad398c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 17:45:06.963917   13712 system_pods.go:61] "registry-creds-764b6fb674-4sqn6" [ff6144d2-13c8-475e-b307-4f201354f1d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 17:45:06.963925   13712 system_pods.go:61] "registry-proxy-r2qlf" [d8893400-4bc4-4eea-9742-a241e52d31e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1016 17:45:06.963935   13712 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d7fm5" [c4e22bc5-8ea4-423f-93bb-6b31c1ffb3b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:06.963945   13712 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tbv8w" [74771ef4-79f1-4980-9a86-e516fbb4e571] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:06.963952   13712 system_pods.go:61] "storage-provisioner" [cf381c97-b27b-46f1-b287-85542c5625d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 17:45:06.963960   13712 system_pods.go:74] duration metric: took 43.572634ms to wait for pod list to return data ...
	I1016 17:45:06.963971   13712 default_sa.go:34] waiting for default service account to be created ...
	I1016 17:45:06.966910   13712 default_sa.go:45] found service account: "default"
	I1016 17:45:06.966939   13712 default_sa.go:55] duration metric: took 2.961133ms for default service account to be created ...
	I1016 17:45:06.966948   13712 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 17:45:07.062187   13712 system_pods.go:86] 20 kube-system pods found
	I1016 17:45:07.062233   13712 system_pods.go:89] "amd-gpu-device-plugin-6bmbl" [92edcbbf-d797-4999-8ce6-d9bd732cc23e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1016 17:45:07.062252   13712 system_pods.go:89] "coredns-66bc5c9577-75dtc" [78c8df84-91a0-4258-99dc-3cb63420358f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 17:45:07.062263   13712 system_pods.go:89] "csi-hostpath-attacher-0" [1cd92c52-4deb-4b96-8e95-d000dd51d895] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 17:45:07.062278   13712 system_pods.go:89] "csi-hostpath-resizer-0" [5a7f2e9a-0e16-4f9a-89da-404ff25e4115] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 17:45:07.062296   13712 system_pods.go:89] "csi-hostpathplugin-lwfnt" [d0e19e01-0ca5-4a49-9f8e-3cd3438fed4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 17:45:07.062308   13712 system_pods.go:89] "etcd-addons-431183" [dacbf6c0-3773-4f4e-a814-ed8813ec5a42] Running
	I1016 17:45:07.062316   13712 system_pods.go:89] "kindnet-xm247" [3a190cf7-af44-4a35-8cea-1a4e799fab68] Running
	I1016 17:45:07.062327   13712 system_pods.go:89] "kube-apiserver-addons-431183" [e968414a-90f6-452b-bc3f-2e8e1999b8e4] Running
	I1016 17:45:07.062332   13712 system_pods.go:89] "kube-controller-manager-addons-431183" [ec5d667f-8b35-4c84-a475-78cf546a78a0] Running
	I1016 17:45:07.062353   13712 system_pods.go:89] "kube-ingress-dns-minikube" [b40908b0-a37c-4873-b577-02403cfebda1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1016 17:45:07.062369   13712 system_pods.go:89] "kube-proxy-kxgwk" [1757da5d-0d02-4508-847f-d04b458e7497] Running
	I1016 17:45:07.062375   13712 system_pods.go:89] "kube-scheduler-addons-431183" [67d05e32-dc46-40a7-8aeb-1a581cfc7dfd] Running
	I1016 17:45:07.062384   13712 system_pods.go:89] "metrics-server-85b7d694d7-m2l65" [37717fb0-1759-4af3-aa42-feadddd69063] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 17:45:07.062403   13712 system_pods.go:89] "nvidia-device-plugin-daemonset-kcsqr" [895271a9-cb66-441d-924c-5aab58267f88] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1016 17:45:07.062412   13712 system_pods.go:89] "registry-6b586f9694-4gxbm" [760d1bfa-750e-4a66-92c9-6f7903ad398c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 17:45:07.062424   13712 system_pods.go:89] "registry-creds-764b6fb674-4sqn6" [ff6144d2-13c8-475e-b307-4f201354f1d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 17:45:07.062437   13712 system_pods.go:89] "registry-proxy-r2qlf" [d8893400-4bc4-4eea-9742-a241e52d31e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1016 17:45:07.062449   13712 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d7fm5" [c4e22bc5-8ea4-423f-93bb-6b31c1ffb3b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:07.062464   13712 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tbv8w" [74771ef4-79f1-4980-9a86-e516fbb4e571] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:07.062473   13712 system_pods.go:89] "storage-provisioner" [cf381c97-b27b-46f1-b287-85542c5625d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 17:45:07.062496   13712 retry.go:31] will retry after 189.830369ms: missing components: kube-dns
	I1016 17:45:07.076872   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:07.258945   13712 system_pods.go:86] 20 kube-system pods found
	I1016 17:45:07.258979   13712 system_pods.go:89] "amd-gpu-device-plugin-6bmbl" [92edcbbf-d797-4999-8ce6-d9bd732cc23e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1016 17:45:07.258989   13712 system_pods.go:89] "coredns-66bc5c9577-75dtc" [78c8df84-91a0-4258-99dc-3cb63420358f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 17:45:07.258999   13712 system_pods.go:89] "csi-hostpath-attacher-0" [1cd92c52-4deb-4b96-8e95-d000dd51d895] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 17:45:07.259008   13712 system_pods.go:89] "csi-hostpath-resizer-0" [5a7f2e9a-0e16-4f9a-89da-404ff25e4115] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 17:45:07.259025   13712 system_pods.go:89] "csi-hostpathplugin-lwfnt" [d0e19e01-0ca5-4a49-9f8e-3cd3438fed4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 17:45:07.259035   13712 system_pods.go:89] "etcd-addons-431183" [dacbf6c0-3773-4f4e-a814-ed8813ec5a42] Running
	I1016 17:45:07.259042   13712 system_pods.go:89] "kindnet-xm247" [3a190cf7-af44-4a35-8cea-1a4e799fab68] Running
	I1016 17:45:07.259051   13712 system_pods.go:89] "kube-apiserver-addons-431183" [e968414a-90f6-452b-bc3f-2e8e1999b8e4] Running
	I1016 17:45:07.259057   13712 system_pods.go:89] "kube-controller-manager-addons-431183" [ec5d667f-8b35-4c84-a475-78cf546a78a0] Running
	I1016 17:45:07.259070   13712 system_pods.go:89] "kube-ingress-dns-minikube" [b40908b0-a37c-4873-b577-02403cfebda1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1016 17:45:07.259078   13712 system_pods.go:89] "kube-proxy-kxgwk" [1757da5d-0d02-4508-847f-d04b458e7497] Running
	I1016 17:45:07.259084   13712 system_pods.go:89] "kube-scheduler-addons-431183" [67d05e32-dc46-40a7-8aeb-1a581cfc7dfd] Running
	I1016 17:45:07.259092   13712 system_pods.go:89] "metrics-server-85b7d694d7-m2l65" [37717fb0-1759-4af3-aa42-feadddd69063] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 17:45:07.259103   13712 system_pods.go:89] "nvidia-device-plugin-daemonset-kcsqr" [895271a9-cb66-441d-924c-5aab58267f88] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1016 17:45:07.259114   13712 system_pods.go:89] "registry-6b586f9694-4gxbm" [760d1bfa-750e-4a66-92c9-6f7903ad398c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 17:45:07.259125   13712 system_pods.go:89] "registry-creds-764b6fb674-4sqn6" [ff6144d2-13c8-475e-b307-4f201354f1d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 17:45:07.259133   13712 system_pods.go:89] "registry-proxy-r2qlf" [d8893400-4bc4-4eea-9742-a241e52d31e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1016 17:45:07.259144   13712 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d7fm5" [c4e22bc5-8ea4-423f-93bb-6b31c1ffb3b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:07.259153   13712 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tbv8w" [74771ef4-79f1-4980-9a86-e516fbb4e571] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:07.259164   13712 system_pods.go:89] "storage-provisioner" [cf381c97-b27b-46f1-b287-85542c5625d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 17:45:07.259181   13712 retry.go:31] will retry after 351.861677ms: missing components: kube-dns
	I1016 17:45:07.299537   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:07.358683   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:07.358904   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:07.577578   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:07.615929   13712 system_pods.go:86] 20 kube-system pods found
	I1016 17:45:07.615961   13712 system_pods.go:89] "amd-gpu-device-plugin-6bmbl" [92edcbbf-d797-4999-8ce6-d9bd732cc23e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1016 17:45:07.615972   13712 system_pods.go:89] "coredns-66bc5c9577-75dtc" [78c8df84-91a0-4258-99dc-3cb63420358f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 17:45:07.615982   13712 system_pods.go:89] "csi-hostpath-attacher-0" [1cd92c52-4deb-4b96-8e95-d000dd51d895] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 17:45:07.615991   13712 system_pods.go:89] "csi-hostpath-resizer-0" [5a7f2e9a-0e16-4f9a-89da-404ff25e4115] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 17:45:07.616005   13712 system_pods.go:89] "csi-hostpathplugin-lwfnt" [d0e19e01-0ca5-4a49-9f8e-3cd3438fed4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 17:45:07.616011   13712 system_pods.go:89] "etcd-addons-431183" [dacbf6c0-3773-4f4e-a814-ed8813ec5a42] Running
	I1016 17:45:07.616027   13712 system_pods.go:89] "kindnet-xm247" [3a190cf7-af44-4a35-8cea-1a4e799fab68] Running
	I1016 17:45:07.616037   13712 system_pods.go:89] "kube-apiserver-addons-431183" [e968414a-90f6-452b-bc3f-2e8e1999b8e4] Running
	I1016 17:45:07.616042   13712 system_pods.go:89] "kube-controller-manager-addons-431183" [ec5d667f-8b35-4c84-a475-78cf546a78a0] Running
	I1016 17:45:07.616053   13712 system_pods.go:89] "kube-ingress-dns-minikube" [b40908b0-a37c-4873-b577-02403cfebda1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1016 17:45:07.616057   13712 system_pods.go:89] "kube-proxy-kxgwk" [1757da5d-0d02-4508-847f-d04b458e7497] Running
	I1016 17:45:07.616065   13712 system_pods.go:89] "kube-scheduler-addons-431183" [67d05e32-dc46-40a7-8aeb-1a581cfc7dfd] Running
	I1016 17:45:07.616073   13712 system_pods.go:89] "metrics-server-85b7d694d7-m2l65" [37717fb0-1759-4af3-aa42-feadddd69063] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 17:45:07.616082   13712 system_pods.go:89] "nvidia-device-plugin-daemonset-kcsqr" [895271a9-cb66-441d-924c-5aab58267f88] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1016 17:45:07.616096   13712 system_pods.go:89] "registry-6b586f9694-4gxbm" [760d1bfa-750e-4a66-92c9-6f7903ad398c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 17:45:07.616104   13712 system_pods.go:89] "registry-creds-764b6fb674-4sqn6" [ff6144d2-13c8-475e-b307-4f201354f1d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 17:45:07.616114   13712 system_pods.go:89] "registry-proxy-r2qlf" [d8893400-4bc4-4eea-9742-a241e52d31e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1016 17:45:07.616125   13712 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d7fm5" [c4e22bc5-8ea4-423f-93bb-6b31c1ffb3b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:07.616136   13712 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tbv8w" [74771ef4-79f1-4980-9a86-e516fbb4e571] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:07.616146   13712 system_pods.go:89] "storage-provisioner" [cf381c97-b27b-46f1-b287-85542c5625d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 17:45:07.616165   13712 retry.go:31] will retry after 306.922072ms: missing components: kube-dns
	I1016 17:45:07.800683   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:07.858828   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:07.858841   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:07.928380   13712 system_pods.go:86] 20 kube-system pods found
	I1016 17:45:07.928416   13712 system_pods.go:89] "amd-gpu-device-plugin-6bmbl" [92edcbbf-d797-4999-8ce6-d9bd732cc23e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1016 17:45:07.928434   13712 system_pods.go:89] "coredns-66bc5c9577-75dtc" [78c8df84-91a0-4258-99dc-3cb63420358f] Running
	I1016 17:45:07.928445   13712 system_pods.go:89] "csi-hostpath-attacher-0" [1cd92c52-4deb-4b96-8e95-d000dd51d895] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 17:45:07.928453   13712 system_pods.go:89] "csi-hostpath-resizer-0" [5a7f2e9a-0e16-4f9a-89da-404ff25e4115] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 17:45:07.928463   13712 system_pods.go:89] "csi-hostpathplugin-lwfnt" [d0e19e01-0ca5-4a49-9f8e-3cd3438fed4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 17:45:07.928474   13712 system_pods.go:89] "etcd-addons-431183" [dacbf6c0-3773-4f4e-a814-ed8813ec5a42] Running
	I1016 17:45:07.928480   13712 system_pods.go:89] "kindnet-xm247" [3a190cf7-af44-4a35-8cea-1a4e799fab68] Running
	I1016 17:45:07.928489   13712 system_pods.go:89] "kube-apiserver-addons-431183" [e968414a-90f6-452b-bc3f-2e8e1999b8e4] Running
	I1016 17:45:07.928495   13712 system_pods.go:89] "kube-controller-manager-addons-431183" [ec5d667f-8b35-4c84-a475-78cf546a78a0] Running
	I1016 17:45:07.928509   13712 system_pods.go:89] "kube-ingress-dns-minikube" [b40908b0-a37c-4873-b577-02403cfebda1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1016 17:45:07.928515   13712 system_pods.go:89] "kube-proxy-kxgwk" [1757da5d-0d02-4508-847f-d04b458e7497] Running
	I1016 17:45:07.928524   13712 system_pods.go:89] "kube-scheduler-addons-431183" [67d05e32-dc46-40a7-8aeb-1a581cfc7dfd] Running
	I1016 17:45:07.928532   13712 system_pods.go:89] "metrics-server-85b7d694d7-m2l65" [37717fb0-1759-4af3-aa42-feadddd69063] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 17:45:07.928545   13712 system_pods.go:89] "nvidia-device-plugin-daemonset-kcsqr" [895271a9-cb66-441d-924c-5aab58267f88] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1016 17:45:07.928557   13712 system_pods.go:89] "registry-6b586f9694-4gxbm" [760d1bfa-750e-4a66-92c9-6f7903ad398c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 17:45:07.928565   13712 system_pods.go:89] "registry-creds-764b6fb674-4sqn6" [ff6144d2-13c8-475e-b307-4f201354f1d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 17:45:07.928577   13712 system_pods.go:89] "registry-proxy-r2qlf" [d8893400-4bc4-4eea-9742-a241e52d31e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1016 17:45:07.928587   13712 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d7fm5" [c4e22bc5-8ea4-423f-93bb-6b31c1ffb3b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:07.928603   13712 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tbv8w" [74771ef4-79f1-4980-9a86-e516fbb4e571] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:07.928609   13712 system_pods.go:89] "storage-provisioner" [cf381c97-b27b-46f1-b287-85542c5625d5] Running
	I1016 17:45:07.928622   13712 system_pods.go:126] duration metric: took 961.666538ms to wait for k8s-apps to be running ...
	I1016 17:45:07.928634   13712 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 17:45:07.928684   13712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 17:45:07.945426   13712 system_svc.go:56] duration metric: took 16.775044ms WaitForService to wait for kubelet
	I1016 17:45:07.945456   13712 kubeadm.go:586] duration metric: took 42.741848123s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 17:45:07.945481   13712 node_conditions.go:102] verifying NodePressure condition ...
	I1016 17:45:07.948757   13712 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1016 17:45:07.948787   13712 node_conditions.go:123] node cpu capacity is 8
	I1016 17:45:07.948803   13712 node_conditions.go:105] duration metric: took 3.316577ms to run NodePressure ...
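For reference, the capacity figures logged above come straight from the node object's status; an equivalent manual check against the addons-431183 cluster (assuming kubectl is configured for it) would be:

	$ kubectl get node addons-431183 -o jsonpath='{.status.capacity}'

which prints the node's cpu, ephemeral-storage, and memory capacity as a single map.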
	I1016 17:45:07.948814   13712 start.go:241] waiting for startup goroutines ...
	I1016 17:45:08.077904   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:08.300102   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:08.401841   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:08.403842   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:08.577562   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:08.800154   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:08.859375   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:08.861087   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:09.078356   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:09.299218   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:09.358500   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:09.359122   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:09.577317   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:09.801263   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:09.858471   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:09.858526   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:10.077389   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:10.300270   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:10.402929   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:10.402927   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:10.576997   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:10.800924   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:10.858147   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:10.858201   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:11.077515   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:11.300178   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:11.358128   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:11.358236   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:11.576779   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:11.799682   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:11.857638   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:11.858243   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:12.077428   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:12.299776   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:12.358166   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:12.358211   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:12.577518   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:12.799560   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:12.858618   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:12.858878   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:13.080610   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:13.300257   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:13.357950   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:13.357954   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:13.576896   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:13.799455   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:13.900065   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:13.900078   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:14.076852   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:14.299757   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:14.358382   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:14.358437   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:14.577557   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:14.800042   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:14.901450   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:14.901478   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:15.077307   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:15.300740   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:15.358385   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:15.358557   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:15.576627   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:15.799967   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:15.857664   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:15.857931   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:16.077189   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:16.299582   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:16.358242   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:16.358396   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:16.577413   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:16.799914   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:16.857517   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:16.858144   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:17.077167   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:17.300060   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:17.357639   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:17.358030   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:17.577171   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:17.799556   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:17.858440   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:17.858531   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:18.077700   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:18.300365   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:18.358602   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:18.358638   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:18.577627   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:18.800110   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:18.858279   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:18.858527   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:19.077531   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:19.300372   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:19.358331   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:19.358528   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:19.362658   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:45:19.576904   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:19.801504   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:19.857512   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:19.858037   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:45:19.967203   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:19.967239   13712 retry.go:31] will retry after 30.485247864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
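The stderr above identifies the root cause: at least one YAML document in /etc/kubernetes/addons/ig-crd.yaml is missing its top-level apiVersion and kind fields, so client-side validation rejects it, while everything else in the apply (apparently from ig-deployment.yaml) goes through unchanged. Two illustrative ways to confirm this on the node, using the same paths as the log:

	$ sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml
	$ sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --validate=false -f /etc/kubernetes/addons/ig-crd.yaml

Note that --validate=false (suggested by the error text itself) merely skips client-side schema validation; it works around the symptom rather than fixing the manifest.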
	I1016 17:45:20.077623   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:20.300648   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:20.358400   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:20.358423   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:20.577218   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:20.799070   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:20.857751   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:20.858231   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:21.077518   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:21.299824   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:21.358681   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:21.358735   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:21.577445   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:21.799523   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:21.857863   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:21.857867   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:22.076791   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:22.299881   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:22.357706   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:22.357904   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:22.577239   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:22.799400   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:22.859010   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:22.859104   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:23.076922   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:23.300890   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:23.357870   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:23.358274   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:23.576648   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:23.799795   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:23.858974   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:23.859159   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:24.076835   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:24.299932   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:24.357606   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:24.358142   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:24.576907   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:24.799593   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:24.857981   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:24.858008   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:25.110886   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:25.299961   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:25.357582   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:25.358202   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:25.577007   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:25.799550   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:25.859259   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:25.860615   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:26.078468   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:26.300071   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:26.357785   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:26.358330   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:26.577445   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:26.800034   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:26.858537   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:26.858642   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:27.077707   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:27.372329   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:27.372407   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:27.372432   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:27.577629   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:27.800285   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:27.858036   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:27.858085   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:28.076839   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:28.300384   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:28.358153   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:28.358294   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:28.577661   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:28.802246   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:28.869380   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:28.869475   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:29.076937   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:29.300564   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:29.358512   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:29.358572   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:29.577478   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:29.800121   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:29.858167   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:29.858454   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:30.077383   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:30.299743   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:30.358422   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:30.358469   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:30.686620   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:30.799883   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:30.857393   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:30.858006   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:31.076647   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:31.346316   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:31.357690   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:31.357901   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:31.576432   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:31.800242   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:31.901289   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:31.901354   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:32.077376   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:32.299103   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:32.357910   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:32.358422   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:32.576993   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:32.800747   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:32.901272   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:32.901891   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:33.077173   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:33.300370   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:33.401489   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:33.401620   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:33.577536   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:33.800054   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:33.858084   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:33.858232   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:34.077145   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:34.299026   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:34.357615   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:34.358314   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:34.577155   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:34.799009   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:34.858034   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:34.858372   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:35.076909   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:35.309581   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:35.358797   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:35.358919   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:35.577305   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:35.799780   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:35.858509   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:35.858768   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:36.077428   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:36.309520   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:36.359470   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:36.359664   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:36.578471   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:36.800195   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:36.885941   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:36.990204   13712 kapi.go:107] duration metric: took 1m10.134815412s to wait for kubernetes.io/minikube-addons=registry ...
	I1016 17:45:37.212391   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:37.299369   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:37.358396   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:37.577138   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:37.799667   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:37.900319   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:38.076974   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:38.300176   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:38.358203   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:38.577699   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:38.815466   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:38.883225   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:39.076841   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:39.338648   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:39.358737   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:39.577552   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:39.799799   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:39.861162   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:40.077095   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:40.299282   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:40.358423   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:40.577224   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:40.799825   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:40.857745   13712 kapi.go:107] duration metric: took 1m14.003152504s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1016 17:45:41.076882   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:41.300389   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:41.682404   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:41.799792   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:42.076483   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:42.299769   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:42.576516   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:42.799969   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:43.076817   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:43.300038   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:43.577002   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:43.799545   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:44.077385   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:44.299240   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:44.577366   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:44.799892   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:45.076681   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:45.299551   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:45.577254   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:45.799257   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:46.076974   13712 kapi.go:107] duration metric: took 1m12.503275751s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1016 17:45:46.078923   13712 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-431183 cluster.
	I1016 17:45:46.080989   13712 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1016 17:45:46.082058   13712 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
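As the gcp-auth messages note, individual pods can opt out of credential mounting by carrying a label with the gcp-auth-skip-secret key. A minimal sketch (pod name, image, and label value are illustrative; only the label key comes from the message above):

	$ kubectl run demo --image=busybox --labels=gcp-auth-skip-secret=true -- sleep 3600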
	I1016 17:45:46.300498   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:46.799670   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:47.300569   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:47.799214   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:48.300023   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:48.799210   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:49.299899   13712 kapi.go:107] duration metric: took 1m22.003802781s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
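With that, all four label-based waits (registry, ingress-nginx, gcp-auth, csi-hostpath-driver) have completed. The same readiness gate can be reproduced by hand with kubectl wait; for the last one (timeout value illustrative):

	$ kubectl wait pod -n kube-system -l kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=Ready --timeout=5m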
	I1016 17:45:50.452862   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1016 17:45:50.996759   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1016 17:45:50.996857   13712 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
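Since only ig-crd.yaml failed validation while the rest of the gadget resources applied cleanly, re-enabling the addon once the manifest is fixed should suffice; an illustrative retry with the same binary and profile used throughout this run:

	$ out/minikube-linux-amd64 -p addons-431183 addons enable inspektor-gadget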
	I1016 17:45:50.999549   13712 out.go:179] * Enabled addons: registry-creds, ingress-dns, nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, amd-gpu-device-plugin, metrics-server, storage-provisioner-rancher, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1016 17:45:51.001122   13712 addons.go:514] duration metric: took 1m25.797455382s for enable addons: enabled=[registry-creds ingress-dns nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner amd-gpu-device-plugin metrics-server storage-provisioner-rancher yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1016 17:45:51.001174   13712 start.go:246] waiting for cluster config update ...
	I1016 17:45:51.001197   13712 start.go:255] writing updated cluster config ...
	I1016 17:45:51.001522   13712 ssh_runner.go:195] Run: rm -f paused
	I1016 17:45:51.006259   13712 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 17:45:51.009658   13712 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-75dtc" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:51.014060   13712 pod_ready.go:94] pod "coredns-66bc5c9577-75dtc" is "Ready"
	I1016 17:45:51.014089   13712 pod_ready.go:86] duration metric: took 4.410303ms for pod "coredns-66bc5c9577-75dtc" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:51.015922   13712 pod_ready.go:83] waiting for pod "etcd-addons-431183" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:51.019406   13712 pod_ready.go:94] pod "etcd-addons-431183" is "Ready"
	I1016 17:45:51.019424   13712 pod_ready.go:86] duration metric: took 3.485204ms for pod "etcd-addons-431183" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:51.021180   13712 pod_ready.go:83] waiting for pod "kube-apiserver-addons-431183" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:51.024534   13712 pod_ready.go:94] pod "kube-apiserver-addons-431183" is "Ready"
	I1016 17:45:51.024558   13712 pod_ready.go:86] duration metric: took 3.356895ms for pod "kube-apiserver-addons-431183" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:51.026249   13712 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-431183" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:51.410357   13712 pod_ready.go:94] pod "kube-controller-manager-addons-431183" is "Ready"
	I1016 17:45:51.410391   13712 pod_ready.go:86] duration metric: took 384.117954ms for pod "kube-controller-manager-addons-431183" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:51.610570   13712 pod_ready.go:83] waiting for pod "kube-proxy-kxgwk" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:52.010346   13712 pod_ready.go:94] pod "kube-proxy-kxgwk" is "Ready"
	I1016 17:45:52.010374   13712 pod_ready.go:86] duration metric: took 399.782985ms for pod "kube-proxy-kxgwk" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:52.210817   13712 pod_ready.go:83] waiting for pod "kube-scheduler-addons-431183" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:52.610356   13712 pod_ready.go:94] pod "kube-scheduler-addons-431183" is "Ready"
	I1016 17:45:52.610391   13712 pod_ready.go:86] duration metric: took 399.549305ms for pod "kube-scheduler-addons-431183" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:52.610405   13712 pod_ready.go:40] duration metric: took 1.604114134s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 17:45:52.654980   13712 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1016 17:45:52.657098   13712 out.go:179] * Done! kubectl is now configured to use "addons-431183" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 16 17:48:37 addons-431183 crio[771]: time="2025-10-16T17:48:37.985809439Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-rxdwk/POD" id=74848c29-5635-4521-9b85-4c2257813b1f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 17:48:37 addons-431183 crio[771]: time="2025-10-16T17:48:37.985936876Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 17:48:37 addons-431183 crio[771]: time="2025-10-16T17:48:37.992624374Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-rxdwk Namespace:default ID:0615c202baafd2be2e811ad7b65575ad45a108c61bb066dda37c3d157a8d3984 UID:03f9592a-bf47-41eb-88ff-49982b617c64 NetNS:/var/run/netns/82fe20b0-18cc-4d08-8d9d-02cba842b0cc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001337f8}] Aliases:map[]}"
	Oct 16 17:48:37 addons-431183 crio[771]: time="2025-10-16T17:48:37.992665837Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-rxdwk to CNI network \"kindnet\" (type=ptp)"
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.003205983Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-rxdwk Namespace:default ID:0615c202baafd2be2e811ad7b65575ad45a108c61bb066dda37c3d157a8d3984 UID:03f9592a-bf47-41eb-88ff-49982b617c64 NetNS:/var/run/netns/82fe20b0-18cc-4d08-8d9d-02cba842b0cc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001337f8}] Aliases:map[]}"
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.003371429Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-rxdwk for CNI network kindnet (type=ptp)"
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.004376144Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.005245159Z" level=info msg="Ran pod sandbox 0615c202baafd2be2e811ad7b65575ad45a108c61bb066dda37c3d157a8d3984 with infra container: default/hello-world-app-5d498dc89-rxdwk/POD" id=74848c29-5635-4521-9b85-4c2257813b1f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.006522338Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=7df3a1a0-64bf-45cf-926c-ead21e2b3908 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.00665676Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=7df3a1a0-64bf-45cf-926c-ead21e2b3908 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.006707866Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=7df3a1a0-64bf-45cf-926c-ead21e2b3908 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.007417405Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=67c9a935-513b-4b38-9184-45829dc2b974 name=/runtime.v1.ImageService/PullImage
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.013414636Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.364962732Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=67c9a935-513b-4b38-9184-45829dc2b974 name=/runtime.v1.ImageService/PullImage
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.365640426Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=729838f0-c24b-459d-9624-e75c96e2e543 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.367349929Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=2b649868-91eb-49c7-a384-0fd733e05611 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.371216193Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-rxdwk/hello-world-app" id=0ef9918f-0697-466b-a6b5-4a63fb904c3f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.371939054Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.378577156Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.378825099Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/32964bf34bc4e59b9ce0dc0d1f0d48c2dd3a777c7e9f9cc570433a910eb6a156/merged/etc/passwd: no such file or directory"
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.378974757Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/32964bf34bc4e59b9ce0dc0d1f0d48c2dd3a777c7e9f9cc570433a910eb6a156/merged/etc/group: no such file or directory"
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.37931428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.406982763Z" level=info msg="Created container 9b51569f8dcf6ea61c945ec4474fdd09fa5149387d155ea1292deab0c7db70a2: default/hello-world-app-5d498dc89-rxdwk/hello-world-app" id=0ef9918f-0697-466b-a6b5-4a63fb904c3f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.40765373Z" level=info msg="Starting container: 9b51569f8dcf6ea61c945ec4474fdd09fa5149387d155ea1292deab0c7db70a2" id=d8a59b9b-fbcb-4cba-932a-3608752d0af8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 17:48:38 addons-431183 crio[771]: time="2025-10-16T17:48:38.409917184Z" level=info msg="Started container" PID=9787 containerID=9b51569f8dcf6ea61c945ec4474fdd09fa5149387d155ea1292deab0c7db70a2 description=default/hello-world-app-5d498dc89-rxdwk/hello-world-app id=d8a59b9b-fbcb-4cba-932a-3608752d0af8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0615c202baafd2be2e811ad7b65575ad45a108c61bb066dda37c3d157a8d3984
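The container status table below is the CRI-level view of the same state; with CRI-O as the runtime it can also be queried directly on the node, e.g. for the container just started above (ID truncated as in the table):

	$ sudo crictl ps --name hello-world-app
	$ sudo crictl logs 9b51569f8dcf6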
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	9b51569f8dcf6       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   0615c202baafd       hello-world-app-5d498dc89-rxdwk             default
	e4db5246739f6       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago       Running             registry-creds                           0                   79a53f9d9eb64       registry-creds-764b6fb674-4sqn6             kube-system
	2326cbe36286a       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                                              2 minutes ago            Running             nginx                                    0                   ae668a5869f8e       nginx                                       default
	a3c5b6e2d8d5e       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   0c1cb091f2163       busybox                                     default
	5de201fd76a95       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   6d6a0e7fe0b48       csi-hostpathplugin-lwfnt                    kube-system
	d2b409cc61d3e       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   6d6a0e7fe0b48       csi-hostpathplugin-lwfnt                    kube-system
	08fc54b7ecf7c       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   6d6a0e7fe0b48       csi-hostpathplugin-lwfnt                    kube-system
	65f92eb5c9126       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   6d6a0e7fe0b48       csi-hostpathplugin-lwfnt                    kube-system
	91299ba87caea       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   adb3b0e94c263       gcp-auth-78565c9fb4-bjwlm                   gcp-auth
	99bd9e93e1a1c       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   6d6a0e7fe0b48       csi-hostpathplugin-lwfnt                    kube-system
	f92ce88c96cd1       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            2 minutes ago            Running             gadget                                   0                   dfd40a0715837       gadget-rwgd7                                gadget
	b8426a978ff8c       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             2 minutes ago            Running             controller                               0                   c4b2956e6d733       ingress-nginx-controller-675c5ddd98-5qwrf   ingress-nginx
	38a6424f0235c       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago            Running             registry-proxy                           0                   7128c4c1360b4       registry-proxy-r2qlf                        kube-system
	d2446d21f394d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   6d6a0e7fe0b48       csi-hostpathplugin-lwfnt                    kube-system
	a6e738e35332b       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   6012a442be78d       amd-gpu-device-plugin-6bmbl                 kube-system
	b7a0a3afc5b5e       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   1bfa9a3b7995a       nvidia-device-plugin-daemonset-kcsqr        kube-system
	cbbc3b73b7dda       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   77f4aad1b6981       snapshot-controller-7d9fbc56b8-d7fm5        kube-system
	dcfdf0dfc495c       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   49157714bbb57       csi-hostpath-attacher-0                     kube-system
	cc78d2815338b       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   13541f8c7b072       snapshot-controller-7d9fbc56b8-tbv8w        kube-system
	7f6105c26156d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              patch                                    0                   98207222fd8e0       ingress-nginx-admission-patch-54q7q         ingress-nginx
	b29b337a76127       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              create                                   0                   83632961c120a       ingress-nginx-admission-create-74xz8        ingress-nginx
	e825d0a32cabb       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   0d010a48c2a8a       csi-hostpath-resizer-0                      kube-system
	c1ee69de8a39e       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   c7bdd7d1294f3       yakd-dashboard-5ff678cb9-6dx84              yakd-dashboard
	f272694b208ca       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   0d06f97ffa6a6       local-path-provisioner-648f6765c9-vrpng     local-path-storage
	d489a26138352       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago            Running             cloud-spanner-emulator                   0                   f5c45c23a757e       cloud-spanner-emulator-86bd5cbb97-6ncpk     default
	eec1c645d1dfa       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   e5a389c40cfcb       registry-6b586f9694-4gxbm                   kube-system
	8eb1df0ef8e8f       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   9124f5ca54c18       kube-ingress-dns-minikube                   kube-system
	eeac328352576       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   b7082ce4753e7       metrics-server-85b7d694d7-m2l65             kube-system
	57066b2143979       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   7bb941800e21f       coredns-66bc5c9577-75dtc                    kube-system
	a03f0987c6223       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   9e1a1cf5d489f       storage-provisioner                         kube-system
	41d8ee3133047       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   55d838a89f39f       kindnet-xm247                               kube-system
	45684000aebf9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago            Running             kube-proxy                               0                   9c5f6ff04e599       kube-proxy-kxgwk                            kube-system
	b6296707185d3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   2e5c5c78c42ed       etcd-addons-431183                          kube-system
	dff4028c6cade       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   3c50e6a4c57be       kube-apiserver-addons-431183                kube-system
	9ddd87f44d89a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   dee15eaa106c4       kube-controller-manager-addons-431183       kube-system
	11a2ed25b01f6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   bc9dea3df5e23       kube-scheduler-addons-431183                kube-system
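	
	The table above has the column layout of crictl's container listing, including the Exited one-shot admission jobs (create/patch) alongside the long-running addons. A sketch to reproduce it inside the node, assuming crictl is on the node's PATH as in the kicbase image:
	
	  minikube ssh -p addons-431183 -- sudo crictl ps -a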
	
	
	==> coredns [57066b214397994168ca58cce65e7697dc0566e9cbd79ddce6bd447b1bb937a0] <==
	[INFO] 10.244.0.22:40245 - 57371 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.007524606s
	[INFO] 10.244.0.22:53825 - 43335 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005256461s
	[INFO] 10.244.0.22:48790 - 64380 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.007972273s
	[INFO] 10.244.0.22:57147 - 52564 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006866058s
	[INFO] 10.244.0.22:41544 - 21263 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007119013s
	[INFO] 10.244.0.22:42198 - 55479 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000996149s
	[INFO] 10.244.0.22:47993 - 40381 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001133847s
	[INFO] 10.244.0.26:60949 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000222406s
	[INFO] 10.244.0.26:57406 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000135851s
	[INFO] 10.244.0.31:49394 - 1570 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000241513s
	[INFO] 10.244.0.31:51493 - 4005 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000320623s
	[INFO] 10.244.0.31:40964 - 27834 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000132921s
	[INFO] 10.244.0.31:43652 - 43604 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000150899s
	[INFO] 10.244.0.31:58967 - 23684 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000188447s
	[INFO] 10.244.0.31:54531 - 11338 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000263198s
	[INFO] 10.244.0.31:47985 - 33969 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.004561417s
	[INFO] 10.244.0.31:46034 - 33674 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.004715209s
	[INFO] 10.244.0.31:46650 - 10115 "A IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.005102796s
	[INFO] 10.244.0.31:36615 - 48413 "AAAA IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.005917911s
	[INFO] 10.244.0.31:42875 - 61205 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004804535s
	[INFO] 10.244.0.31:39263 - 39420 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005858044s
	[INFO] 10.244.0.31:39280 - 20275 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.006075808s
	[INFO] 10.244.0.31:48224 - 38826 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.006852407s
	[INFO] 10.244.0.31:55098 - 47153 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001792027s
	[INFO] 10.244.0.31:58478 - 20821 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001975209s
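	
	The NXDOMAIN runs above are ordinary search-path expansion, not failures: with the default ndots:5 in the pod's resolv.conf, a name like accounts.google.com is tried against every search domain first (cluster suffixes such as kube-system.svc.cluster.local, then host suffixes such as google.internal) before the absolute query, which returns NOERROR at the end of each run. A sketch to confirm this from the busybox pod listed above, assuming it is still running:
	
	  kubectl --context addons-431183 exec busybox -- cat /etc/resolv.conf
	  kubectl --context addons-431183 exec busybox -- nslookup storage.googleapis.com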
	
	
	==> describe nodes <==
	Name:               addons-431183
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-431183
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=addons-431183
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T17_44_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-431183
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-431183"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 17:44:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-431183
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 17:48:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 17:48:24 +0000   Thu, 16 Oct 2025 17:44:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 17:48:24 +0000   Thu, 16 Oct 2025 17:44:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 17:48:24 +0000   Thu, 16 Oct 2025 17:44:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 17:48:24 +0000   Thu, 16 Oct 2025 17:45:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-431183
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                067683fc-48c6-4d92-80f9-6bb27411d961
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  default                     cloud-spanner-emulator-86bd5cbb97-6ncpk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  default                     hello-world-app-5d498dc89-rxdwk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-rwgd7                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  gcp-auth                    gcp-auth-78565c9fb4-bjwlm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-5qwrf    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m13s
	  kube-system                 amd-gpu-device-plugin-6bmbl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 coredns-66bc5c9577-75dtc                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m14s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 csi-hostpathplugin-lwfnt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 etcd-addons-431183                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m21s
	  kube-system                 kindnet-xm247                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m14s
	  kube-system                 kube-apiserver-addons-431183                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-controller-manager-addons-431183        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-proxy-kxgwk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-scheduler-addons-431183                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 metrics-server-85b7d694d7-m2l65              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m13s
	  kube-system                 nvidia-device-plugin-daemonset-kcsqr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 registry-6b586f9694-4gxbm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 registry-creds-764b6fb674-4sqn6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 registry-proxy-r2qlf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 snapshot-controller-7d9fbc56b8-d7fm5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 snapshot-controller-7d9fbc56b8-tbv8w         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  local-path-storage          local-path-provisioner-648f6765c9-vrpng      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-6dx84               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m12s  kube-proxy       
	  Normal  Starting                 4m20s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m20s  kubelet          Node addons-431183 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s  kubelet          Node addons-431183 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s  kubelet          Node addons-431183 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m15s  node-controller  Node addons-431183 event: Registered Node addons-431183 in Controller
	  Normal  NodeReady                3m33s  kubelet          Node addons-431183 status is now: NodeReady
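	
	This block is the node description for the single control-plane node: Ready since 17:45:06, no taints, and 29 pods whose requests (1050m CPU, 638Mi memory) leave ample headroom on 8 CPUs / 32Gi. A sketch to regenerate it against the cluster from this run, assuming the kubeconfig context minikube created for the profile:
	
	  kubectl --context addons-431183 describe node addons-431183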
	
	
	==> dmesg <==
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.008703] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.022984] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023933] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +2.047848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +4.031714] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +8.063410] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[ +16.382910] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[Oct16 17:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
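	
	A "martian source" is a packet whose source address is impossible on the receiving interface; here 127.0.0.1 arriving on eth0, with the kernel backing off its reports (roughly 1s, 2s, 4s, 8s, 16s between entries). Whether these are logged at all is governed by the log_martians sysctl; a sketch to inspect the relevant knobs on the node:
	
	  minikube ssh -p addons-431183 -- sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter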
	
	
	==> etcd [b6296707185d3b7e279abf2a92f9765e375112a754d9ea74965f9e1abd3911e2] <==
	{"level":"warn","ts":"2025-10-16T17:44:16.744663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:16.751112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:16.757844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:16.770223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:16.776671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:16.782881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:16.827805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:27.781949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:27.788316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:54.212445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:54.219106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:54.234613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:54.240985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54408","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-16T17:45:30.685209Z","caller":"traceutil/trace.go:172","msg":"trace[774761099] linearizableReadLoop","detail":"{readStateIndex:1112; appliedIndex:1112; }","duration":"109.206438ms","start":"2025-10-16T17:45:30.575976Z","end":"2025-10-16T17:45:30.685182Z","steps":["trace[774761099] 'read index received'  (duration: 109.199129ms)","trace[774761099] 'applied index is now lower than readState.Index'  (duration: 6.327µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-16T17:45:30.685331Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.328442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-16T17:45:30.685407Z","caller":"traceutil/trace.go:172","msg":"trace[764036054] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1081; }","duration":"109.421754ms","start":"2025-10-16T17:45:30.575971Z","end":"2025-10-16T17:45:30.685392Z","steps":["trace[764036054] 'agreement among raft nodes before linearized reading'  (duration: 109.283876ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T17:45:30.685413Z","caller":"traceutil/trace.go:172","msg":"trace[1015217370] transaction","detail":"{read_only:false; response_revision:1082; number_of_response:1; }","duration":"110.362935ms","start":"2025-10-16T17:45:30.575036Z","end":"2025-10-16T17:45:30.685399Z","steps":["trace[1015217370] 'process raft request'  (duration: 110.203854ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-16T17:45:36.988254Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.350532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-16T17:45:36.988319Z","caller":"traceutil/trace.go:172","msg":"trace[782498295] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1114; }","duration":"130.426718ms","start":"2025-10-16T17:45:36.857878Z","end":"2025-10-16T17:45:36.988305Z","steps":["trace[782498295] 'agreement among raft nodes before linearized reading'  (duration: 73.167239ms)","trace[782498295] 'range keys from in-memory index tree'  (duration: 57.156255ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-16T17:45:36.988579Z","caller":"traceutil/trace.go:172","msg":"trace[824864921] transaction","detail":"{read_only:false; response_revision:1115; number_of_response:1; }","duration":"159.015549ms","start":"2025-10-16T17:45:36.829548Z","end":"2025-10-16T17:45:36.988563Z","steps":["trace[824864921] 'process raft request'  (duration: 101.491659ms)","trace[824864921] 'compare'  (duration: 57.204922ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-16T17:45:37.210910Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.8176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-16T17:45:37.210970Z","caller":"traceutil/trace.go:172","msg":"trace[1874514587] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1115; }","duration":"134.887486ms","start":"2025-10-16T17:45:37.076068Z","end":"2025-10-16T17:45:37.210956Z","steps":["trace[1874514587] 'agreement among raft nodes before linearized reading'  (duration: 67.082922ms)","trace[1874514587] 'range keys from in-memory index tree'  (duration: 67.706562ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-16T17:45:37.211126Z","caller":"traceutil/trace.go:172","msg":"trace[932068433] transaction","detail":"{read_only:false; response_revision:1117; number_of_response:1; }","duration":"206.842153ms","start":"2025-10-16T17:45:37.004266Z","end":"2025-10-16T17:45:37.211108Z","steps":["trace[932068433] 'process raft request'  (duration: 206.72694ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T17:45:37.211172Z","caller":"traceutil/trace.go:172","msg":"trace[2080297863] transaction","detail":"{read_only:false; response_revision:1116; number_of_response:1; }","duration":"217.543749ms","start":"2025-10-16T17:45:36.993611Z","end":"2025-10-16T17:45:37.211155Z","steps":["trace[2080297863] 'process raft request'  (duration: 149.582297ms)","trace[2080297863] 'compare'  (duration: 67.690706ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-16T17:45:41.523200Z","caller":"traceutil/trace.go:172","msg":"trace[325051896] transaction","detail":"{read_only:false; response_revision:1142; number_of_response:1; }","duration":"113.435004ms","start":"2025-10-16T17:45:41.409751Z","end":"2025-10-16T17:45:41.523186Z","steps":["trace[325051896] 'process raft request'  (duration: 113.329749ms)"],"step_count":1}
	
	
	==> gcp-auth [91299ba87caea68119fa480d693dcbde2ce9a5e0369273f86d1a501c683e5e82] <==
	2025/10/16 17:45:45 GCP Auth Webhook started!
	2025/10/16 17:45:52 Ready to marshal response ...
	2025/10/16 17:45:52 Ready to write response ...
	2025/10/16 17:45:53 Ready to marshal response ...
	2025/10/16 17:45:53 Ready to write response ...
	2025/10/16 17:45:53 Ready to marshal response ...
	2025/10/16 17:45:53 Ready to write response ...
	2025/10/16 17:46:07 Ready to marshal response ...
	2025/10/16 17:46:07 Ready to write response ...
	2025/10/16 17:46:07 Ready to marshal response ...
	2025/10/16 17:46:07 Ready to write response ...
	2025/10/16 17:46:11 Ready to marshal response ...
	2025/10/16 17:46:11 Ready to write response ...
	2025/10/16 17:46:12 Ready to marshal response ...
	2025/10/16 17:46:12 Ready to write response ...
	2025/10/16 17:46:14 Ready to marshal response ...
	2025/10/16 17:46:14 Ready to write response ...
	2025/10/16 17:46:22 Ready to marshal response ...
	2025/10/16 17:46:22 Ready to write response ...
	2025/10/16 17:46:48 Ready to marshal response ...
	2025/10/16 17:46:48 Ready to write response ...
	2025/10/16 17:48:37 Ready to marshal response ...
	2025/10/16 17:48:37 Ready to write response ...
	
	
	==> kernel <==
	 17:48:39 up 31 min,  0 user,  load average: 0.29, 0.51, 0.26
	Linux addons-431183 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [41d8ee31330479f662f22c3d691f1814f4211417eb9afa4e0e1436fff7459c0d] <==
	I1016 17:46:36.154484       1 main.go:301] handling current node
	I1016 17:46:46.162534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:46:46.162576       1 main.go:301] handling current node
	I1016 17:46:56.154891       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:46:56.154925       1 main.go:301] handling current node
	I1016 17:47:06.161811       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:47:06.161853       1 main.go:301] handling current node
	I1016 17:47:16.157362       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:47:16.157423       1 main.go:301] handling current node
	I1016 17:47:26.155040       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:47:26.155064       1 main.go:301] handling current node
	I1016 17:47:36.156734       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:47:36.156770       1 main.go:301] handling current node
	I1016 17:47:46.156375       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:47:46.156423       1 main.go:301] handling current node
	I1016 17:47:56.155647       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:47:56.155694       1 main.go:301] handling current node
	I1016 17:48:06.156357       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:48:06.156399       1 main.go:301] handling current node
	I1016 17:48:16.156076       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:48:16.156109       1 main.go:301] handling current node
	I1016 17:48:26.155567       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:48:26.155595       1 main.go:301] handling current node
	I1016 17:48:36.154854       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:48:36.154893       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dff4028c6cade53db8c168852a237cc808da96d812dcf0a0d74a84d9fd8e1856] <==
	 > logger="UnhandledError"
	E1016 17:45:09.776035       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.237.197:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.237.197:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.237.197:443: connect: connection refused" logger="UnhandledError"
	E1016 17:45:09.784855       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.237.197:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.237.197:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.237.197:443: connect: connection refused" logger="UnhandledError"
	W1016 17:45:10.776900       1 handler_proxy.go:99] no RequestInfo found in the context
	W1016 17:45:10.776942       1 handler_proxy.go:99] no RequestInfo found in the context
	E1016 17:45:10.776993       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1016 17:45:10.777018       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1016 17:45:10.776993       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1016 17:45:10.778178       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1016 17:45:14.811829       1 handler_proxy.go:99] no RequestInfo found in the context
	E1016 17:45:14.811902       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.237.197:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.237.197:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	E1016 17:45:14.811907       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1016 17:45:14.826382       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1016 17:46:01.340546       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60040: use of closed network connection
	E1016 17:46:01.495483       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60070: use of closed network connection
	I1016 17:46:12.571179       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1016 17:46:12.769460       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.190.236"}
	I1016 17:46:32.861119       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1016 17:48:37.744077       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.219.91"}
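	
	The UnhandledError burst here is the aggregation layer probing the metrics-server APIService before its endpoint was up (connection refused, then 503), ending once the GroupVersion is re-added at 17:45:14. Given the failed TestAddons/parallel/MetricsServer entry at the top of this report, checking the APIService directly is the natural follow-up; a sketch:
	
	  kubectl --context addons-431183 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context addons-431183 get --raw /apis/metrics.k8s.io/v1beta1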
	
	
	==> kube-controller-manager [9ddd87f44d89a802929d4d0b0c661f079cd91f596a5e0d6f5d10e18663962117] <==
	I1016 17:44:24.194631       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1016 17:44:24.194769       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1016 17:44:24.194792       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1016 17:44:24.194883       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 17:44:24.195093       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1016 17:44:24.195121       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1016 17:44:24.195132       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 17:44:24.195431       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 17:44:24.195440       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1016 17:44:24.195458       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1016 17:44:24.195579       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 17:44:24.196843       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 17:44:24.199060       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1016 17:44:24.200228       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1016 17:44:24.201420       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 17:44:24.207629       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1016 17:44:24.215031       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1016 17:44:54.205473       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1016 17:44:54.205611       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1016 17:44:54.205653       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1016 17:44:54.225751       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1016 17:44:54.229246       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1016 17:44:54.306045       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 17:44:54.329603       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 17:45:09.200422       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
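	
	The controller-manager errors are the same metrics.k8s.io unavailability seen from the discovery side: the resource-quota and garbage-collector controllers cannot finish API discovery while the GroupVersion is stale, and both report synced caches at 17:44:54 once it clears. A sketch that surfaces any group still failing discovery (the errors go to stderr):
	
	  kubectl --context addons-431183 api-resources > /dev/null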
	
	
	==> kube-proxy [45684000aebf96bc6ace9a9a3a1dc6c5c1d69d5fa2cba670031d05998617678d] <==
	I1016 17:44:25.645489       1 server_linux.go:53] "Using iptables proxy"
	I1016 17:44:25.986615       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 17:44:26.086811       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 17:44:26.092821       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1016 17:44:26.096747       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 17:44:26.262565       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 17:44:26.262744       1 server_linux.go:132] "Using iptables Proxier"
	I1016 17:44:26.271429       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 17:44:26.280773       1 server.go:527] "Version info" version="v1.34.1"
	I1016 17:44:26.280868       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 17:44:26.291510       1 config.go:200] "Starting service config controller"
	I1016 17:44:26.297183       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 17:44:26.291704       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 17:44:26.297509       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 17:44:26.291773       1 config.go:106] "Starting endpoint slice config controller"
	I1016 17:44:26.297578       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 17:44:26.293681       1 config.go:309] "Starting node config controller"
	I1016 17:44:26.297642       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 17:44:26.297666       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 17:44:26.399333       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1016 17:44:26.399394       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 17:44:26.399424       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
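	
	The lone error in this section is advisory: with nodePortAddresses unset, NodePort connections are accepted on every local IP. In a kubeadm-style cluster like this one the setting lives in the kube-proxy ConfigMap; a sketch of the suggested change, assuming the ConfigMap name kube-proxy, the DaemonSet label k8s-app=kube-proxy, and that v1.34 accepts the special value "primary":
	
	  kubectl --context addons-431183 -n kube-system edit configmap kube-proxy
	  # in config.conf, set:  nodePortAddresses: ["primary"]
	  # then recreate the kube-proxy pod so it picks up the new config
	  kubectl --context addons-431183 -n kube-system delete pod -l k8s-app=kube-proxy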
	
	
	==> kube-scheduler [11a2ed25b01f6ec1f1099f8328d958fe73b716ce518c158023e0e7704045d279] <==
	E1016 17:44:17.238364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 17:44:17.238421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 17:44:17.238451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 17:44:17.238520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 17:44:17.238570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 17:44:17.238599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 17:44:17.238602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 17:44:17.238659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 17:44:17.238694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 17:44:17.238697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 17:44:17.238759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 17:44:17.238808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 17:44:17.238808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 17:44:18.084602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 17:44:18.126969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 17:44:18.149882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 17:44:18.155121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 17:44:18.160389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 17:44:18.219495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 17:44:18.386523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 17:44:18.408875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 17:44:18.441386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 17:44:18.472404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 17:44:18.482372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1016 17:44:18.833771       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
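	
	The "Failed to watch ... forbidden" storm is a startup race: the scheduler's informers begin listing before the apiserver has finished publishing the bootstrap RBAC bindings, and the errors taper off as that completes (the last forbidden entry and the synced-caches message both land within 17:44:18). A sketch to confirm the permissions are in place now:
	
	  kubectl --context addons-431183 auth can-i list pods --as system:kube-scheduler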
	
	
	==> kubelet <==
	Oct 16 17:46:55 addons-431183 kubelet[1277]: I1016 17:46:55.873411    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34405bf0-2624-48f5-b663-6dbd30c77d45-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "34405bf0-2624-48f5-b663-6dbd30c77d45" (UID: "34405bf0-2624-48f5-b663-6dbd30c77d45"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 16 17:46:55 addons-431183 kubelet[1277]: I1016 17:46:55.873474    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^168dc74d-aab8-11f0-9d12-baccf3d73479\") pod \"34405bf0-2624-48f5-b663-6dbd30c77d45\" (UID: \"34405bf0-2624-48f5-b663-6dbd30c77d45\") "
	Oct 16 17:46:55 addons-431183 kubelet[1277]: I1016 17:46:55.873603    1277 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/34405bf0-2624-48f5-b663-6dbd30c77d45-gcp-creds\") on node \"addons-431183\" DevicePath \"\""
	Oct 16 17:46:55 addons-431183 kubelet[1277]: I1016 17:46:55.875949    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34405bf0-2624-48f5-b663-6dbd30c77d45-kube-api-access-r6jqf" (OuterVolumeSpecName: "kube-api-access-r6jqf") pod "34405bf0-2624-48f5-b663-6dbd30c77d45" (UID: "34405bf0-2624-48f5-b663-6dbd30c77d45"). InnerVolumeSpecName "kube-api-access-r6jqf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 16 17:46:55 addons-431183 kubelet[1277]: I1016 17:46:55.876813    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^168dc74d-aab8-11f0-9d12-baccf3d73479" (OuterVolumeSpecName: "task-pv-storage") pod "34405bf0-2624-48f5-b663-6dbd30c77d45" (UID: "34405bf0-2624-48f5-b663-6dbd30c77d45"). InnerVolumeSpecName "pvc-3aa33fe4-3f27-4c36-886c-d41097634246". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 16 17:46:55 addons-431183 kubelet[1277]: I1016 17:46:55.974681    1277 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r6jqf\" (UniqueName: \"kubernetes.io/projected/34405bf0-2624-48f5-b663-6dbd30c77d45-kube-api-access-r6jqf\") on node \"addons-431183\" DevicePath \"\""
	Oct 16 17:46:55 addons-431183 kubelet[1277]: I1016 17:46:55.974745    1277 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-3aa33fe4-3f27-4c36-886c-d41097634246\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^168dc74d-aab8-11f0-9d12-baccf3d73479\") on node \"addons-431183\" "
	Oct 16 17:46:55 addons-431183 kubelet[1277]: I1016 17:46:55.979561    1277 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-3aa33fe4-3f27-4c36-886c-d41097634246" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^168dc74d-aab8-11f0-9d12-baccf3d73479") on node "addons-431183"
	Oct 16 17:46:56 addons-431183 kubelet[1277]: I1016 17:46:56.075312    1277 reconciler_common.go:299] "Volume detached for volume \"pvc-3aa33fe4-3f27-4c36-886c-d41097634246\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^168dc74d-aab8-11f0-9d12-baccf3d73479\") on node \"addons-431183\" DevicePath \"\""
	Oct 16 17:46:56 addons-431183 kubelet[1277]: I1016 17:46:56.169033    1277 scope.go:117] "RemoveContainer" containerID="bbc4226d96ef2f5f484ce55ad87ed69af440bfe069c68f269e8e850fbb7d0247"
	Oct 16 17:46:56 addons-431183 kubelet[1277]: I1016 17:46:56.180321    1277 scope.go:117] "RemoveContainer" containerID="bbc4226d96ef2f5f484ce55ad87ed69af440bfe069c68f269e8e850fbb7d0247"
	Oct 16 17:46:56 addons-431183 kubelet[1277]: E1016 17:46:56.180822    1277 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbc4226d96ef2f5f484ce55ad87ed69af440bfe069c68f269e8e850fbb7d0247\": container with ID starting with bbc4226d96ef2f5f484ce55ad87ed69af440bfe069c68f269e8e850fbb7d0247 not found: ID does not exist" containerID="bbc4226d96ef2f5f484ce55ad87ed69af440bfe069c68f269e8e850fbb7d0247"
	Oct 16 17:46:56 addons-431183 kubelet[1277]: I1016 17:46:56.180867    1277 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc4226d96ef2f5f484ce55ad87ed69af440bfe069c68f269e8e850fbb7d0247"} err="failed to get container status \"bbc4226d96ef2f5f484ce55ad87ed69af440bfe069c68f269e8e850fbb7d0247\": rpc error: code = NotFound desc = could not find container \"bbc4226d96ef2f5f484ce55ad87ed69af440bfe069c68f269e8e850fbb7d0247\": container with ID starting with bbc4226d96ef2f5f484ce55ad87ed69af440bfe069c68f269e8e850fbb7d0247 not found: ID does not exist"
	Oct 16 17:46:57 addons-431183 kubelet[1277]: I1016 17:46:57.543421    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-kcsqr" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 17:46:57 addons-431183 kubelet[1277]: I1016 17:46:57.546021    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34405bf0-2624-48f5-b663-6dbd30c77d45" path="/var/lib/kubelet/pods/34405bf0-2624-48f5-b663-6dbd30c77d45/volumes"
	Oct 16 17:46:58 addons-431183 kubelet[1277]: I1016 17:46:58.543967    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-r2qlf" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 17:47:09 addons-431183 kubelet[1277]: E1016 17:47:09.577219    1277 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-4sqn6" podUID="ff6144d2-13c8-475e-b307-4f201354f1d0"
	Oct 16 17:47:19 addons-431183 kubelet[1277]: I1016 17:47:19.594425    1277 scope.go:117] "RemoveContainer" containerID="ac735583a39afad6bd117611c7d07857e9c9a97dc3cd60ddc0cb433c6dfa660c"
	Oct 16 17:47:57 addons-431183 kubelet[1277]: I1016 17:47:57.543636    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-6bmbl" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 17:48:05 addons-431183 kubelet[1277]: I1016 17:48:05.543509    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-r2qlf" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 17:48:11 addons-431183 kubelet[1277]: I1016 17:48:11.544116    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-kcsqr" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 17:48:37 addons-431183 kubelet[1277]: I1016 17:48:37.675503    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-4sqn6" podStartSLOduration=251.388513668 podStartE2EDuration="4m12.675480113s" podCreationTimestamp="2025-10-16 17:44:25 +0000 UTC" firstStartedPulling="2025-10-16 17:47:22.567102072 +0000 UTC m=+183.107189473" lastFinishedPulling="2025-10-16 17:47:23.854068506 +0000 UTC m=+184.394155918" observedRunningTime="2025-10-16 17:47:24.289864391 +0000 UTC m=+184.829951812" watchObservedRunningTime="2025-10-16 17:48:37.675480113 +0000 UTC m=+258.215567533"
	Oct 16 17:48:37 addons-431183 kubelet[1277]: I1016 17:48:37.705074    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x26hq\" (UniqueName: \"kubernetes.io/projected/03f9592a-bf47-41eb-88ff-49982b617c64-kube-api-access-x26hq\") pod \"hello-world-app-5d498dc89-rxdwk\" (UID: \"03f9592a-bf47-41eb-88ff-49982b617c64\") " pod="default/hello-world-app-5d498dc89-rxdwk"
	Oct 16 17:48:37 addons-431183 kubelet[1277]: I1016 17:48:37.705141    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/03f9592a-bf47-41eb-88ff-49982b617c64-gcp-creds\") pod \"hello-world-app-5d498dc89-rxdwk\" (UID: \"03f9592a-bf47-41eb-88ff-49982b617c64\") " pod="default/hello-world-app-5d498dc89-rxdwk"
	Oct 16 17:48:38 addons-431183 kubelet[1277]: I1016 17:48:38.559459    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-rxdwk" podStartSLOduration=1.199747792 podStartE2EDuration="1.559437214s" podCreationTimestamp="2025-10-16 17:48:37 +0000 UTC" firstStartedPulling="2025-10-16 17:48:38.007038665 +0000 UTC m=+258.547126075" lastFinishedPulling="2025-10-16 17:48:38.366728083 +0000 UTC m=+258.906815497" observedRunningTime="2025-10-16 17:48:38.558152212 +0000 UTC m=+259.098239633" watchObservedRunningTime="2025-10-16 17:48:38.559437214 +0000 UTC m=+259.099524634"
	
	
	==> storage-provisioner [a03f0987c6223c30fbe18342b589910e10048b7c4dbf5c3f99e594bc81dbb424] <==
	W1016 17:48:13.870741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:15.873616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:15.878896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:17.882164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:17.885896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:19.889211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:19.892853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:21.895953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:21.901199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:23.904368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:23.908194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:25.911272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:25.916401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:27.919848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:27.924595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:29.927650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:29.931350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:31.934319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:31.939188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:33.942357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:33.946063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:35.948599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:35.952582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:37.955792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:48:37.960760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
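The storage-provisioner warnings above fire in pairs roughly every two seconds, which matches the cadence of its leader-election renewal against a kube-system Endpoints object (attributing the calls to leader election is an assumption based on that cadence, not on the provisioner's source). The EndpointSlice replacement that the warning points at can be inspected directly; a sketch reusing this run's context:

	kubectl --context addons-431183 -n kube-system get endpointslices.discovery.k8s.io
	kubectl --context addons-431183 -n kube-system get endpoints   # still served, but emits the same v1.33+ deprecation warning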
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-431183 -n addons-431183
helpers_test.go:269: (dbg) Run:  kubectl --context addons-431183 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-74xz8 ingress-nginx-admission-patch-54q7q
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-431183 describe pod ingress-nginx-admission-create-74xz8 ingress-nginx-admission-patch-54q7q
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-431183 describe pod ingress-nginx-admission-create-74xz8 ingress-nginx-admission-patch-54q7q: exit status 1 (54.702849ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-74xz8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-54q7q" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-431183 describe pod ingress-nginx-admission-create-74xz8 ingress-nginx-admission-patch-54q7q: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-431183 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (226.570956ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1016 17:48:40.191418   28314 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:48:40.191678   28314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:48:40.191688   28314 out.go:374] Setting ErrFile to fd 2...
	I1016 17:48:40.191692   28314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:48:40.191920   28314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:48:40.192187   28314 mustload.go:65] Loading cluster: addons-431183
	I1016 17:48:40.192519   28314 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:48:40.192534   28314 addons.go:606] checking whether the cluster is paused
	I1016 17:48:40.192609   28314 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:48:40.192620   28314 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:48:40.193029   28314 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:48:40.210995   28314 ssh_runner.go:195] Run: systemctl --version
	I1016 17:48:40.211055   28314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:48:40.229003   28314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:48:40.325385   28314 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 17:48:40.325482   28314 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 17:48:40.354106   28314 cri.go:89] found id: "e4db5246739f60bb25f1a534571421aa13c11b8ba35febba084c918bc19bdb01"
	I1016 17:48:40.354129   28314 cri.go:89] found id: "5de201fd76a95a80e483d6bee60036aeac5703e71d605551db1f90ed5e9267a7"
	I1016 17:48:40.354135   28314 cri.go:89] found id: "d2b409cc61d3e2b69dd7c9f80ddcd05def59f62a7cefd12f8c55f87dcd6806a2"
	I1016 17:48:40.354139   28314 cri.go:89] found id: "08fc54b7ecf7cfa954b370e40086d3605dd13a25c93db2881d92e1a2ceb32818"
	I1016 17:48:40.354143   28314 cri.go:89] found id: "65f92eb5c9126dc449d2f8032d89aa4e86a59da9246b185c71c940339c685cb2"
	I1016 17:48:40.354148   28314 cri.go:89] found id: "99bd9e93e1a1cfcb509eac334bc64e1eb422d9a27526d7a41c2e44010e147d51"
	I1016 17:48:40.354151   28314 cri.go:89] found id: "38a6424f0235c17e3591fa147b42d05aa0a270d890721e294fd2fe1e8fe7587f"
	I1016 17:48:40.354155   28314 cri.go:89] found id: "d2446d21f394d6a6ae9ea6cae0aa10dad1d25f3e04a34f1daec8456cb08d666b"
	I1016 17:48:40.354159   28314 cri.go:89] found id: "a6e738e35332b188947c0fa616526771177e13dac5478fce260658918fdb41b8"
	I1016 17:48:40.354170   28314 cri.go:89] found id: "b7a0a3afc5b5ee8079bdbc565b4e1804880cfc1ae2c416fa5c0bd0e8745f5bf3"
	I1016 17:48:40.354174   28314 cri.go:89] found id: "cbbc3b73b7dda0bb72f1dc0c07bdb653e82157caf2508af0e2dd8299580cbcc4"
	I1016 17:48:40.354178   28314 cri.go:89] found id: "dcfdf0dfc495c7bf8e3c5deff3d0667e25cef10a0ec40832e0dfed43139faddb"
	I1016 17:48:40.354182   28314 cri.go:89] found id: "cc78d2815338b398d4da9da584c4783f77c9936d139a116db70055573929cea8"
	I1016 17:48:40.354187   28314 cri.go:89] found id: "e825d0a32cabb6134f6fc2e76f0012259095724f465caf717fd20ec00f0f4761"
	I1016 17:48:40.354191   28314 cri.go:89] found id: "eec1c645d1dfa3722eaae814ffa92000df394ff370ff10ad29f463e88bc9c3a4"
	I1016 17:48:40.354207   28314 cri.go:89] found id: "8eb1df0ef8e8f68de86edf699e07499ed7450f104d16fdac57eadedc16c5f057"
	I1016 17:48:40.354217   28314 cri.go:89] found id: "eeac3283525767a2a2b238ada5540b61f7b05df294e06a358fe5559a660e9f17"
	I1016 17:48:40.354223   28314 cri.go:89] found id: "57066b214397994168ca58cce65e7697dc0566e9cbd79ddce6bd447b1bb937a0"
	I1016 17:48:40.354227   28314 cri.go:89] found id: "a03f0987c6223c30fbe18342b589910e10048b7c4dbf5c3f99e594bc81dbb424"
	I1016 17:48:40.354231   28314 cri.go:89] found id: "41d8ee31330479f662f22c3d691f1814f4211417eb9afa4e0e1436fff7459c0d"
	I1016 17:48:40.354235   28314 cri.go:89] found id: "45684000aebf96bc6ace9a9a3a1dc6c5c1d69d5fa2cba670031d05998617678d"
	I1016 17:48:40.354239   28314 cri.go:89] found id: "b6296707185d3b7e279abf2a92f9765e375112a754d9ea74965f9e1abd3911e2"
	I1016 17:48:40.354244   28314 cri.go:89] found id: "dff4028c6cade53db8c168852a237cc808da96d812dcf0a0d74a84d9fd8e1856"
	I1016 17:48:40.354250   28314 cri.go:89] found id: "9ddd87f44d89a802929d4d0b0c661f079cd91f596a5e0d6f5d10e18663962117"
	I1016 17:48:40.354255   28314 cri.go:89] found id: "11a2ed25b01f6ec1f1099f8328d958fe73b716ce518c158023e0e7704045d279"
	I1016 17:48:40.354260   28314 cri.go:89] found id: ""
	I1016 17:48:40.354305   28314 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 17:48:40.368560   28314 out.go:203] 
	W1016 17:48:40.369863   28314 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:48:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:48:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 17:48:40.369894   28314 out.go:285] * 
	* 
	W1016 17:48:40.372909   28314 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 17:48:40.374183   28314 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-431183 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
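Every `addons disable` in this run fails the same way: minikube's paused-state check shells into the node and runs `sudo runc list -f json`, which exits 1 because `/run/runc` is absent, even though `crictl` earlier in the same trace happily enumerates the kube-system containers. A minimal diagnostic sketch, reusing the profile name from this run; the crio state path in the second command is an assumption, since crio may be configured with a different OCI runtime or runtime root:

	minikube ssh -p addons-431183 -- sudo ls /run/runc   # reproduces the failure: no such file or directory
	minikube ssh -p addons-431183 -- sudo ls /run/crio   # where crio keeps its own state (assumption)
	# the CRI endpoint itself is healthy, which is why the container IDs above were found:
	minikube ssh -p addons-431183 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system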
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-431183 addons disable ingress --alsologtostderr -v=1: exit status 11 (228.788971ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1016 17:48:40.421771   28377 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:48:40.422041   28377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:48:40.422052   28377 out.go:374] Setting ErrFile to fd 2...
	I1016 17:48:40.422056   28377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:48:40.422260   28377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:48:40.422506   28377 mustload.go:65] Loading cluster: addons-431183
	I1016 17:48:40.422857   28377 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:48:40.422871   28377 addons.go:606] checking whether the cluster is paused
	I1016 17:48:40.422952   28377 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:48:40.422962   28377 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:48:40.423351   28377 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:48:40.440569   28377 ssh_runner.go:195] Run: systemctl --version
	I1016 17:48:40.440617   28377 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:48:40.457923   28377 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:48:40.554903   28377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 17:48:40.554984   28377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 17:48:40.583093   28377 cri.go:89] found id: "e4db5246739f60bb25f1a534571421aa13c11b8ba35febba084c918bc19bdb01"
	I1016 17:48:40.583115   28377 cri.go:89] found id: "5de201fd76a95a80e483d6bee60036aeac5703e71d605551db1f90ed5e9267a7"
	I1016 17:48:40.583121   28377 cri.go:89] found id: "d2b409cc61d3e2b69dd7c9f80ddcd05def59f62a7cefd12f8c55f87dcd6806a2"
	I1016 17:48:40.583126   28377 cri.go:89] found id: "08fc54b7ecf7cfa954b370e40086d3605dd13a25c93db2881d92e1a2ceb32818"
	I1016 17:48:40.583130   28377 cri.go:89] found id: "65f92eb5c9126dc449d2f8032d89aa4e86a59da9246b185c71c940339c685cb2"
	I1016 17:48:40.583134   28377 cri.go:89] found id: "99bd9e93e1a1cfcb509eac334bc64e1eb422d9a27526d7a41c2e44010e147d51"
	I1016 17:48:40.583138   28377 cri.go:89] found id: "38a6424f0235c17e3591fa147b42d05aa0a270d890721e294fd2fe1e8fe7587f"
	I1016 17:48:40.583142   28377 cri.go:89] found id: "d2446d21f394d6a6ae9ea6cae0aa10dad1d25f3e04a34f1daec8456cb08d666b"
	I1016 17:48:40.583146   28377 cri.go:89] found id: "a6e738e35332b188947c0fa616526771177e13dac5478fce260658918fdb41b8"
	I1016 17:48:40.583156   28377 cri.go:89] found id: "b7a0a3afc5b5ee8079bdbc565b4e1804880cfc1ae2c416fa5c0bd0e8745f5bf3"
	I1016 17:48:40.583161   28377 cri.go:89] found id: "cbbc3b73b7dda0bb72f1dc0c07bdb653e82157caf2508af0e2dd8299580cbcc4"
	I1016 17:48:40.583166   28377 cri.go:89] found id: "dcfdf0dfc495c7bf8e3c5deff3d0667e25cef10a0ec40832e0dfed43139faddb"
	I1016 17:48:40.583171   28377 cri.go:89] found id: "cc78d2815338b398d4da9da584c4783f77c9936d139a116db70055573929cea8"
	I1016 17:48:40.583176   28377 cri.go:89] found id: "e825d0a32cabb6134f6fc2e76f0012259095724f465caf717fd20ec00f0f4761"
	I1016 17:48:40.583181   28377 cri.go:89] found id: "eec1c645d1dfa3722eaae814ffa92000df394ff370ff10ad29f463e88bc9c3a4"
	I1016 17:48:40.583196   28377 cri.go:89] found id: "8eb1df0ef8e8f68de86edf699e07499ed7450f104d16fdac57eadedc16c5f057"
	I1016 17:48:40.583206   28377 cri.go:89] found id: "eeac3283525767a2a2b238ada5540b61f7b05df294e06a358fe5559a660e9f17"
	I1016 17:48:40.583213   28377 cri.go:89] found id: "57066b214397994168ca58cce65e7697dc0566e9cbd79ddce6bd447b1bb937a0"
	I1016 17:48:40.583216   28377 cri.go:89] found id: "a03f0987c6223c30fbe18342b589910e10048b7c4dbf5c3f99e594bc81dbb424"
	I1016 17:48:40.583220   28377 cri.go:89] found id: "41d8ee31330479f662f22c3d691f1814f4211417eb9afa4e0e1436fff7459c0d"
	I1016 17:48:40.583223   28377 cri.go:89] found id: "45684000aebf96bc6ace9a9a3a1dc6c5c1d69d5fa2cba670031d05998617678d"
	I1016 17:48:40.583227   28377 cri.go:89] found id: "b6296707185d3b7e279abf2a92f9765e375112a754d9ea74965f9e1abd3911e2"
	I1016 17:48:40.583230   28377 cri.go:89] found id: "dff4028c6cade53db8c168852a237cc808da96d812dcf0a0d74a84d9fd8e1856"
	I1016 17:48:40.583234   28377 cri.go:89] found id: "9ddd87f44d89a802929d4d0b0c661f079cd91f596a5e0d6f5d10e18663962117"
	I1016 17:48:40.583237   28377 cri.go:89] found id: "11a2ed25b01f6ec1f1099f8328d958fe73b716ce518c158023e0e7704045d279"
	I1016 17:48:40.583241   28377 cri.go:89] found id: ""
	I1016 17:48:40.583288   28377 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 17:48:40.597219   28377 out.go:203] 
	W1016 17:48:40.598451   28377 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:48:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:48:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 17:48:40.598471   28377 out.go:285] * 
	* 
	W1016 17:48:40.601508   28377 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 17:48:40.602826   28377 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-431183 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (148.29s)

TestAddons/parallel/InspektorGadget (5.24s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-rwgd7" [134b52a9-cbfd-4c96-8af6-17d8e3de3ef6] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003117585s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-431183 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (237.332918ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1016 17:46:20.316434   24869 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:46:20.316688   24869 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:20.316697   24869 out.go:374] Setting ErrFile to fd 2...
	I1016 17:46:20.316701   24869 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:20.316885   24869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:46:20.317163   24869 mustload.go:65] Loading cluster: addons-431183
	I1016 17:46:20.317469   24869 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:20.317482   24869 addons.go:606] checking whether the cluster is paused
	I1016 17:46:20.317557   24869 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:20.317568   24869 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:46:20.317947   24869 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:46:20.336222   24869 ssh_runner.go:195] Run: systemctl --version
	I1016 17:46:20.336265   24869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:46:20.354845   24869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:46:20.451466   24869 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 17:46:20.451545   24869 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 17:46:20.482543   24869 cri.go:89] found id: "5de201fd76a95a80e483d6bee60036aeac5703e71d605551db1f90ed5e9267a7"
	I1016 17:46:20.482563   24869 cri.go:89] found id: "d2b409cc61d3e2b69dd7c9f80ddcd05def59f62a7cefd12f8c55f87dcd6806a2"
	I1016 17:46:20.482566   24869 cri.go:89] found id: "08fc54b7ecf7cfa954b370e40086d3605dd13a25c93db2881d92e1a2ceb32818"
	I1016 17:46:20.482569   24869 cri.go:89] found id: "65f92eb5c9126dc449d2f8032d89aa4e86a59da9246b185c71c940339c685cb2"
	I1016 17:46:20.482572   24869 cri.go:89] found id: "99bd9e93e1a1cfcb509eac334bc64e1eb422d9a27526d7a41c2e44010e147d51"
	I1016 17:46:20.482575   24869 cri.go:89] found id: "38a6424f0235c17e3591fa147b42d05aa0a270d890721e294fd2fe1e8fe7587f"
	I1016 17:46:20.482578   24869 cri.go:89] found id: "d2446d21f394d6a6ae9ea6cae0aa10dad1d25f3e04a34f1daec8456cb08d666b"
	I1016 17:46:20.482580   24869 cri.go:89] found id: "a6e738e35332b188947c0fa616526771177e13dac5478fce260658918fdb41b8"
	I1016 17:46:20.482582   24869 cri.go:89] found id: "b7a0a3afc5b5ee8079bdbc565b4e1804880cfc1ae2c416fa5c0bd0e8745f5bf3"
	I1016 17:46:20.482593   24869 cri.go:89] found id: "cbbc3b73b7dda0bb72f1dc0c07bdb653e82157caf2508af0e2dd8299580cbcc4"
	I1016 17:46:20.482595   24869 cri.go:89] found id: "dcfdf0dfc495c7bf8e3c5deff3d0667e25cef10a0ec40832e0dfed43139faddb"
	I1016 17:46:20.482598   24869 cri.go:89] found id: "cc78d2815338b398d4da9da584c4783f77c9936d139a116db70055573929cea8"
	I1016 17:46:20.482600   24869 cri.go:89] found id: "e825d0a32cabb6134f6fc2e76f0012259095724f465caf717fd20ec00f0f4761"
	I1016 17:46:20.482603   24869 cri.go:89] found id: "eec1c645d1dfa3722eaae814ffa92000df394ff370ff10ad29f463e88bc9c3a4"
	I1016 17:46:20.482605   24869 cri.go:89] found id: "8eb1df0ef8e8f68de86edf699e07499ed7450f104d16fdac57eadedc16c5f057"
	I1016 17:46:20.482611   24869 cri.go:89] found id: "eeac3283525767a2a2b238ada5540b61f7b05df294e06a358fe5559a660e9f17"
	I1016 17:46:20.482614   24869 cri.go:89] found id: "57066b214397994168ca58cce65e7697dc0566e9cbd79ddce6bd447b1bb937a0"
	I1016 17:46:20.482618   24869 cri.go:89] found id: "a03f0987c6223c30fbe18342b589910e10048b7c4dbf5c3f99e594bc81dbb424"
	I1016 17:46:20.482620   24869 cri.go:89] found id: "41d8ee31330479f662f22c3d691f1814f4211417eb9afa4e0e1436fff7459c0d"
	I1016 17:46:20.482622   24869 cri.go:89] found id: "45684000aebf96bc6ace9a9a3a1dc6c5c1d69d5fa2cba670031d05998617678d"
	I1016 17:46:20.482625   24869 cri.go:89] found id: "b6296707185d3b7e279abf2a92f9765e375112a754d9ea74965f9e1abd3911e2"
	I1016 17:46:20.482633   24869 cri.go:89] found id: "dff4028c6cade53db8c168852a237cc808da96d812dcf0a0d74a84d9fd8e1856"
	I1016 17:46:20.482635   24869 cri.go:89] found id: "9ddd87f44d89a802929d4d0b0c661f079cd91f596a5e0d6f5d10e18663962117"
	I1016 17:46:20.482638   24869 cri.go:89] found id: "11a2ed25b01f6ec1f1099f8328d958fe73b716ce518c158023e0e7704045d279"
	I1016 17:46:20.482640   24869 cri.go:89] found id: ""
	I1016 17:46:20.482675   24869 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 17:46:20.499573   24869 out.go:203] 
	W1016 17:46:20.500996   24869 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 17:46:20.501026   24869 out.go:285] * 
	* 
	W1016 17:46:20.504466   24869 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 17:46:20.506265   24869 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-431183 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.24s)

TestAddons/parallel/MetricsServer (5.33s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.453954ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-m2l65" [37717fb0-1759-4af3-aa42-feadddd69063] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004088864s
addons_test.go:463: (dbg) Run:  kubectl --context addons-431183 top pods -n kube-system
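Beyond `kubectl top`, the aggregated Metrics API that metrics-server registers can be queried directly, which separates "the metrics-server pod is Running" from "the API actually serves data". A sketch against the standard metrics.k8s.io group:

	kubectl --context addons-431183 get --raw /apis/metrics.k8s.io/v1beta1/nodes
	kubectl --context addons-431183 get --raw /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods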
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-431183 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (248.295641ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1016 17:46:12.117165   23390 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:46:12.117302   23390 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:12.117312   23390 out.go:374] Setting ErrFile to fd 2...
	I1016 17:46:12.117316   23390 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:12.117584   23390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:46:12.117872   23390 mustload.go:65] Loading cluster: addons-431183
	I1016 17:46:12.118278   23390 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:12.118299   23390 addons.go:606] checking whether the cluster is paused
	I1016 17:46:12.118408   23390 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:12.118423   23390 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:46:12.119204   23390 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:46:12.139540   23390 ssh_runner.go:195] Run: systemctl --version
	I1016 17:46:12.139592   23390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:46:12.159147   23390 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:46:12.258460   23390 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 17:46:12.258557   23390 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 17:46:12.289737   23390 cri.go:89] found id: "5de201fd76a95a80e483d6bee60036aeac5703e71d605551db1f90ed5e9267a7"
	I1016 17:46:12.289761   23390 cri.go:89] found id: "d2b409cc61d3e2b69dd7c9f80ddcd05def59f62a7cefd12f8c55f87dcd6806a2"
	I1016 17:46:12.289767   23390 cri.go:89] found id: "08fc54b7ecf7cfa954b370e40086d3605dd13a25c93db2881d92e1a2ceb32818"
	I1016 17:46:12.289772   23390 cri.go:89] found id: "65f92eb5c9126dc449d2f8032d89aa4e86a59da9246b185c71c940339c685cb2"
	I1016 17:46:12.289777   23390 cri.go:89] found id: "99bd9e93e1a1cfcb509eac334bc64e1eb422d9a27526d7a41c2e44010e147d51"
	I1016 17:46:12.289781   23390 cri.go:89] found id: "38a6424f0235c17e3591fa147b42d05aa0a270d890721e294fd2fe1e8fe7587f"
	I1016 17:46:12.289784   23390 cri.go:89] found id: "d2446d21f394d6a6ae9ea6cae0aa10dad1d25f3e04a34f1daec8456cb08d666b"
	I1016 17:46:12.289787   23390 cri.go:89] found id: "a6e738e35332b188947c0fa616526771177e13dac5478fce260658918fdb41b8"
	I1016 17:46:12.289791   23390 cri.go:89] found id: "b7a0a3afc5b5ee8079bdbc565b4e1804880cfc1ae2c416fa5c0bd0e8745f5bf3"
	I1016 17:46:12.289797   23390 cri.go:89] found id: "cbbc3b73b7dda0bb72f1dc0c07bdb653e82157caf2508af0e2dd8299580cbcc4"
	I1016 17:46:12.289800   23390 cri.go:89] found id: "dcfdf0dfc495c7bf8e3c5deff3d0667e25cef10a0ec40832e0dfed43139faddb"
	I1016 17:46:12.289802   23390 cri.go:89] found id: "cc78d2815338b398d4da9da584c4783f77c9936d139a116db70055573929cea8"
	I1016 17:46:12.289805   23390 cri.go:89] found id: "e825d0a32cabb6134f6fc2e76f0012259095724f465caf717fd20ec00f0f4761"
	I1016 17:46:12.289808   23390 cri.go:89] found id: "eec1c645d1dfa3722eaae814ffa92000df394ff370ff10ad29f463e88bc9c3a4"
	I1016 17:46:12.289818   23390 cri.go:89] found id: "8eb1df0ef8e8f68de86edf699e07499ed7450f104d16fdac57eadedc16c5f057"
	I1016 17:46:12.289827   23390 cri.go:89] found id: "eeac3283525767a2a2b238ada5540b61f7b05df294e06a358fe5559a660e9f17"
	I1016 17:46:12.289830   23390 cri.go:89] found id: "57066b214397994168ca58cce65e7697dc0566e9cbd79ddce6bd447b1bb937a0"
	I1016 17:46:12.289834   23390 cri.go:89] found id: "a03f0987c6223c30fbe18342b589910e10048b7c4dbf5c3f99e594bc81dbb424"
	I1016 17:46:12.289837   23390 cri.go:89] found id: "41d8ee31330479f662f22c3d691f1814f4211417eb9afa4e0e1436fff7459c0d"
	I1016 17:46:12.289839   23390 cri.go:89] found id: "45684000aebf96bc6ace9a9a3a1dc6c5c1d69d5fa2cba670031d05998617678d"
	I1016 17:46:12.289843   23390 cri.go:89] found id: "b6296707185d3b7e279abf2a92f9765e375112a754d9ea74965f9e1abd3911e2"
	I1016 17:46:12.289846   23390 cri.go:89] found id: "dff4028c6cade53db8c168852a237cc808da96d812dcf0a0d74a84d9fd8e1856"
	I1016 17:46:12.289848   23390 cri.go:89] found id: "9ddd87f44d89a802929d4d0b0c661f079cd91f596a5e0d6f5d10e18663962117"
	I1016 17:46:12.289851   23390 cri.go:89] found id: "11a2ed25b01f6ec1f1099f8328d958fe73b716ce518c158023e0e7704045d279"
	I1016 17:46:12.289853   23390 cri.go:89] found id: ""
	I1016 17:46:12.289893   23390 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 17:46:12.304566   23390 out.go:203] 
	W1016 17:46:12.305973   23390 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 17:46:12.306009   23390 out.go:285] * 
	* 
	W1016 17:46:12.309273   23390 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 17:46:12.310880   23390 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-431183 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.33s)

TestAddons/parallel/CSI (47.49s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1016 17:46:09.503396   12375 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1016 17:46:09.506779   12375 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1016 17:46:09.506801   12375 kapi.go:107] duration metric: took 3.419006ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.428651ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-431183 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc -o jsonpath={.status.phase} -n default
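The fourteen polls above are the helper re-reading `.status.phase` until the claim reports Bound. On kubectl 1.23 or newer the same wait collapses to a single command via a jsonpath condition (a sketch, not what the harness runs):

	kubectl --context addons-431183 -n default wait pvc/hpvc --for=jsonpath='{.status.phase}'=Bound --timeout=6m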
addons_test.go:562: (dbg) Run:  kubectl --context addons-431183 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [f84d72fb-c935-4893-8a89-0cde5225fa83] Pending
helpers_test.go:352: "task-pv-pod" [f84d72fb-c935-4893-8a89-0cde5225fa83] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [f84d72fb-c935-4893-8a89-0cde5225fa83] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.002899618s
addons_test.go:572: (dbg) Run:  kubectl --context addons-431183 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-431183 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-431183 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-431183 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-431183 delete pod task-pv-pod: (1.179738846s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-431183 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-431183 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-431183 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [34405bf0-2624-48f5-b663-6dbd30c77d45] Pending
helpers_test.go:352: "task-pv-pod-restore" [34405bf0-2624-48f5-b663-6dbd30c77d45] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [34405bf0-2624-48f5-b663-6dbd30c77d45] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003456771s
addons_test.go:614: (dbg) Run:  kubectl --context addons-431183 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-431183 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-431183 delete volumesnapshot new-snapshot-demo
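The restore leg works because a CSI VolumeSnapshot can serve as a PVC data source: `hpvc-restore` is presumably created from `new-snapshot-demo` via `spec.dataSource` (an inference from the pvc-restore.yaml name and the snapshot APIs in play; the file contents are not shown here). Two read-only checks of that wiring, had they been run before the three cleanup deletes above:

	kubectl --context addons-431183 -n default get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
	kubectl --context addons-431183 -n default get pvc hpvc-restore -o jsonpath='{.spec.dataSource.kind}/{.spec.dataSource.name}'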
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-431183 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (237.856216ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1016 17:46:56.560284   26107 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:46:56.560569   26107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:56.560579   26107 out.go:374] Setting ErrFile to fd 2...
	I1016 17:46:56.560583   26107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:56.560797   26107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:46:56.561071   26107 mustload.go:65] Loading cluster: addons-431183
	I1016 17:46:56.561416   26107 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:56.561431   26107 addons.go:606] checking whether the cluster is paused
	I1016 17:46:56.561507   26107 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:56.561519   26107 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:46:56.561899   26107 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:46:56.582022   26107 ssh_runner.go:195] Run: systemctl --version
	I1016 17:46:56.582085   26107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:46:56.601216   26107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:46:56.699553   26107 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 17:46:56.699625   26107 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 17:46:56.729893   26107 cri.go:89] found id: "5de201fd76a95a80e483d6bee60036aeac5703e71d605551db1f90ed5e9267a7"
	I1016 17:46:56.729917   26107 cri.go:89] found id: "d2b409cc61d3e2b69dd7c9f80ddcd05def59f62a7cefd12f8c55f87dcd6806a2"
	I1016 17:46:56.729923   26107 cri.go:89] found id: "08fc54b7ecf7cfa954b370e40086d3605dd13a25c93db2881d92e1a2ceb32818"
	I1016 17:46:56.729928   26107 cri.go:89] found id: "65f92eb5c9126dc449d2f8032d89aa4e86a59da9246b185c71c940339c685cb2"
	I1016 17:46:56.729932   26107 cri.go:89] found id: "99bd9e93e1a1cfcb509eac334bc64e1eb422d9a27526d7a41c2e44010e147d51"
	I1016 17:46:56.729937   26107 cri.go:89] found id: "38a6424f0235c17e3591fa147b42d05aa0a270d890721e294fd2fe1e8fe7587f"
	I1016 17:46:56.729941   26107 cri.go:89] found id: "d2446d21f394d6a6ae9ea6cae0aa10dad1d25f3e04a34f1daec8456cb08d666b"
	I1016 17:46:56.729946   26107 cri.go:89] found id: "a6e738e35332b188947c0fa616526771177e13dac5478fce260658918fdb41b8"
	I1016 17:46:56.729950   26107 cri.go:89] found id: "b7a0a3afc5b5ee8079bdbc565b4e1804880cfc1ae2c416fa5c0bd0e8745f5bf3"
	I1016 17:46:56.729959   26107 cri.go:89] found id: "cbbc3b73b7dda0bb72f1dc0c07bdb653e82157caf2508af0e2dd8299580cbcc4"
	I1016 17:46:56.729963   26107 cri.go:89] found id: "dcfdf0dfc495c7bf8e3c5deff3d0667e25cef10a0ec40832e0dfed43139faddb"
	I1016 17:46:56.729967   26107 cri.go:89] found id: "cc78d2815338b398d4da9da584c4783f77c9936d139a116db70055573929cea8"
	I1016 17:46:56.729969   26107 cri.go:89] found id: "e825d0a32cabb6134f6fc2e76f0012259095724f465caf717fd20ec00f0f4761"
	I1016 17:46:56.729972   26107 cri.go:89] found id: "eec1c645d1dfa3722eaae814ffa92000df394ff370ff10ad29f463e88bc9c3a4"
	I1016 17:46:56.729974   26107 cri.go:89] found id: "8eb1df0ef8e8f68de86edf699e07499ed7450f104d16fdac57eadedc16c5f057"
	I1016 17:46:56.729989   26107 cri.go:89] found id: "eeac3283525767a2a2b238ada5540b61f7b05df294e06a358fe5559a660e9f17"
	I1016 17:46:56.729995   26107 cri.go:89] found id: "57066b214397994168ca58cce65e7697dc0566e9cbd79ddce6bd447b1bb937a0"
	I1016 17:46:56.730000   26107 cri.go:89] found id: "a03f0987c6223c30fbe18342b589910e10048b7c4dbf5c3f99e594bc81dbb424"
	I1016 17:46:56.730003   26107 cri.go:89] found id: "41d8ee31330479f662f22c3d691f1814f4211417eb9afa4e0e1436fff7459c0d"
	I1016 17:46:56.730005   26107 cri.go:89] found id: "45684000aebf96bc6ace9a9a3a1dc6c5c1d69d5fa2cba670031d05998617678d"
	I1016 17:46:56.730007   26107 cri.go:89] found id: "b6296707185d3b7e279abf2a92f9765e375112a754d9ea74965f9e1abd3911e2"
	I1016 17:46:56.730010   26107 cri.go:89] found id: "dff4028c6cade53db8c168852a237cc808da96d812dcf0a0d74a84d9fd8e1856"
	I1016 17:46:56.730012   26107 cri.go:89] found id: "9ddd87f44d89a802929d4d0b0c661f079cd91f596a5e0d6f5d10e18663962117"
	I1016 17:46:56.730014   26107 cri.go:89] found id: "11a2ed25b01f6ec1f1099f8328d958fe73b716ce518c158023e0e7704045d279"
	I1016 17:46:56.730017   26107 cri.go:89] found id: ""
	I1016 17:46:56.730055   26107 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 17:46:56.744422   26107 out.go:203] 
	W1016 17:46:56.745670   26107 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 17:46:56.745695   26107 out.go:285] * 
	W1016 17:46:56.748695   26107 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 17:46:56.750030   26107 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-431183 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
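The exit-status-11 failures in this group share the proximate cause visible in the stderr above: before touching an addon, minikube checks whether the cluster is paused by listing kube-system containers (the crictl call succeeds) and then running `sudo runc list -f json`, which fails because the default runc state directory /run/runc does not exist on this crio node (crio keeps container state under its own runtime root). A minimal reproduction from the host, assuming the profile is still up; the commands mirror the ssh_runner lines in the log:
	# Succeeds: crio answers over the CRI socket.
	minikube -p addons-431183 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# Fails with "open /run/runc: no such file or directory".
	minikube -p addons-431183 ssh -- sudo runc list -f json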
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-431183 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (234.983517ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1016 17:46:56.797777   26167 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:46:56.798047   26167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:56.798056   26167 out.go:374] Setting ErrFile to fd 2...
	I1016 17:46:56.798060   26167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:56.798265   26167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:46:56.798490   26167 mustload.go:65] Loading cluster: addons-431183
	I1016 17:46:56.798893   26167 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:56.798910   26167 addons.go:606] checking whether the cluster is paused
	I1016 17:46:56.798989   26167 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:56.799001   26167 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:46:56.799356   26167 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:46:56.817999   26167 ssh_runner.go:195] Run: systemctl --version
	I1016 17:46:56.818041   26167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:46:56.835565   26167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:46:56.933476   26167 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 17:46:56.933543   26167 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 17:46:56.964335   26167 cri.go:89] found id: "5de201fd76a95a80e483d6bee60036aeac5703e71d605551db1f90ed5e9267a7"
	I1016 17:46:56.964353   26167 cri.go:89] found id: "d2b409cc61d3e2b69dd7c9f80ddcd05def59f62a7cefd12f8c55f87dcd6806a2"
	I1016 17:46:56.964356   26167 cri.go:89] found id: "08fc54b7ecf7cfa954b370e40086d3605dd13a25c93db2881d92e1a2ceb32818"
	I1016 17:46:56.964359   26167 cri.go:89] found id: "65f92eb5c9126dc449d2f8032d89aa4e86a59da9246b185c71c940339c685cb2"
	I1016 17:46:56.964362   26167 cri.go:89] found id: "99bd9e93e1a1cfcb509eac334bc64e1eb422d9a27526d7a41c2e44010e147d51"
	I1016 17:46:56.964365   26167 cri.go:89] found id: "38a6424f0235c17e3591fa147b42d05aa0a270d890721e294fd2fe1e8fe7587f"
	I1016 17:46:56.964368   26167 cri.go:89] found id: "d2446d21f394d6a6ae9ea6cae0aa10dad1d25f3e04a34f1daec8456cb08d666b"
	I1016 17:46:56.964370   26167 cri.go:89] found id: "a6e738e35332b188947c0fa616526771177e13dac5478fce260658918fdb41b8"
	I1016 17:46:56.964373   26167 cri.go:89] found id: "b7a0a3afc5b5ee8079bdbc565b4e1804880cfc1ae2c416fa5c0bd0e8745f5bf3"
	I1016 17:46:56.964378   26167 cri.go:89] found id: "cbbc3b73b7dda0bb72f1dc0c07bdb653e82157caf2508af0e2dd8299580cbcc4"
	I1016 17:46:56.964382   26167 cri.go:89] found id: "dcfdf0dfc495c7bf8e3c5deff3d0667e25cef10a0ec40832e0dfed43139faddb"
	I1016 17:46:56.964386   26167 cri.go:89] found id: "cc78d2815338b398d4da9da584c4783f77c9936d139a116db70055573929cea8"
	I1016 17:46:56.964390   26167 cri.go:89] found id: "e825d0a32cabb6134f6fc2e76f0012259095724f465caf717fd20ec00f0f4761"
	I1016 17:46:56.964394   26167 cri.go:89] found id: "eec1c645d1dfa3722eaae814ffa92000df394ff370ff10ad29f463e88bc9c3a4"
	I1016 17:46:56.964399   26167 cri.go:89] found id: "8eb1df0ef8e8f68de86edf699e07499ed7450f104d16fdac57eadedc16c5f057"
	I1016 17:46:56.964421   26167 cri.go:89] found id: "eeac3283525767a2a2b238ada5540b61f7b05df294e06a358fe5559a660e9f17"
	I1016 17:46:56.964431   26167 cri.go:89] found id: "57066b214397994168ca58cce65e7697dc0566e9cbd79ddce6bd447b1bb937a0"
	I1016 17:46:56.964437   26167 cri.go:89] found id: "a03f0987c6223c30fbe18342b589910e10048b7c4dbf5c3f99e594bc81dbb424"
	I1016 17:46:56.964440   26167 cri.go:89] found id: "41d8ee31330479f662f22c3d691f1814f4211417eb9afa4e0e1436fff7459c0d"
	I1016 17:46:56.964442   26167 cri.go:89] found id: "45684000aebf96bc6ace9a9a3a1dc6c5c1d69d5fa2cba670031d05998617678d"
	I1016 17:46:56.964445   26167 cri.go:89] found id: "b6296707185d3b7e279abf2a92f9765e375112a754d9ea74965f9e1abd3911e2"
	I1016 17:46:56.964447   26167 cri.go:89] found id: "dff4028c6cade53db8c168852a237cc808da96d812dcf0a0d74a84d9fd8e1856"
	I1016 17:46:56.964449   26167 cri.go:89] found id: "9ddd87f44d89a802929d4d0b0c661f079cd91f596a5e0d6f5d10e18663962117"
	I1016 17:46:56.964452   26167 cri.go:89] found id: "11a2ed25b01f6ec1f1099f8328d958fe73b716ce518c158023e0e7704045d279"
	I1016 17:46:56.964455   26167 cri.go:89] found id: ""
	I1016 17:46:56.964491   26167 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 17:46:56.978867   26167 out.go:203] 
	W1016 17:46:56.980480   26167 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 17:46:56.980500   26167 out.go:285] * 
	W1016 17:46:56.983750   26167 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 17:46:56.985364   26167 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-431183 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (47.49s)

TestAddons/parallel/Headlamp (2.53s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-431183 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-431183 --alsologtostderr -v=1: exit status 11 (240.829315ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1016 17:46:01.784206   22079 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:46:01.784507   22079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:01.784518   22079 out.go:374] Setting ErrFile to fd 2...
	I1016 17:46:01.784525   22079 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:01.784761   22079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:46:01.785036   22079 mustload.go:65] Loading cluster: addons-431183
	I1016 17:46:01.785414   22079 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:01.785434   22079 addons.go:606] checking whether the cluster is paused
	I1016 17:46:01.785535   22079 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:01.785551   22079 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:46:01.785955   22079 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:46:01.804318   22079 ssh_runner.go:195] Run: systemctl --version
	I1016 17:46:01.804374   22079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:46:01.822920   22079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:46:01.921179   22079 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 17:46:01.921265   22079 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 17:46:01.953009   22079 cri.go:89] found id: "5de201fd76a95a80e483d6bee60036aeac5703e71d605551db1f90ed5e9267a7"
	I1016 17:46:01.953033   22079 cri.go:89] found id: "d2b409cc61d3e2b69dd7c9f80ddcd05def59f62a7cefd12f8c55f87dcd6806a2"
	I1016 17:46:01.953038   22079 cri.go:89] found id: "08fc54b7ecf7cfa954b370e40086d3605dd13a25c93db2881d92e1a2ceb32818"
	I1016 17:46:01.953043   22079 cri.go:89] found id: "65f92eb5c9126dc449d2f8032d89aa4e86a59da9246b185c71c940339c685cb2"
	I1016 17:46:01.953047   22079 cri.go:89] found id: "99bd9e93e1a1cfcb509eac334bc64e1eb422d9a27526d7a41c2e44010e147d51"
	I1016 17:46:01.953051   22079 cri.go:89] found id: "38a6424f0235c17e3591fa147b42d05aa0a270d890721e294fd2fe1e8fe7587f"
	I1016 17:46:01.953055   22079 cri.go:89] found id: "d2446d21f394d6a6ae9ea6cae0aa10dad1d25f3e04a34f1daec8456cb08d666b"
	I1016 17:46:01.953059   22079 cri.go:89] found id: "a6e738e35332b188947c0fa616526771177e13dac5478fce260658918fdb41b8"
	I1016 17:46:01.953063   22079 cri.go:89] found id: "b7a0a3afc5b5ee8079bdbc565b4e1804880cfc1ae2c416fa5c0bd0e8745f5bf3"
	I1016 17:46:01.953070   22079 cri.go:89] found id: "cbbc3b73b7dda0bb72f1dc0c07bdb653e82157caf2508af0e2dd8299580cbcc4"
	I1016 17:46:01.953075   22079 cri.go:89] found id: "dcfdf0dfc495c7bf8e3c5deff3d0667e25cef10a0ec40832e0dfed43139faddb"
	I1016 17:46:01.953079   22079 cri.go:89] found id: "cc78d2815338b398d4da9da584c4783f77c9936d139a116db70055573929cea8"
	I1016 17:46:01.953083   22079 cri.go:89] found id: "e825d0a32cabb6134f6fc2e76f0012259095724f465caf717fd20ec00f0f4761"
	I1016 17:46:01.953087   22079 cri.go:89] found id: "eec1c645d1dfa3722eaae814ffa92000df394ff370ff10ad29f463e88bc9c3a4"
	I1016 17:46:01.953091   22079 cri.go:89] found id: "8eb1df0ef8e8f68de86edf699e07499ed7450f104d16fdac57eadedc16c5f057"
	I1016 17:46:01.953106   22079 cri.go:89] found id: "eeac3283525767a2a2b238ada5540b61f7b05df294e06a358fe5559a660e9f17"
	I1016 17:46:01.953110   22079 cri.go:89] found id: "57066b214397994168ca58cce65e7697dc0566e9cbd79ddce6bd447b1bb937a0"
	I1016 17:46:01.953116   22079 cri.go:89] found id: "a03f0987c6223c30fbe18342b589910e10048b7c4dbf5c3f99e594bc81dbb424"
	I1016 17:46:01.953120   22079 cri.go:89] found id: "41d8ee31330479f662f22c3d691f1814f4211417eb9afa4e0e1436fff7459c0d"
	I1016 17:46:01.953125   22079 cri.go:89] found id: "45684000aebf96bc6ace9a9a3a1dc6c5c1d69d5fa2cba670031d05998617678d"
	I1016 17:46:01.953129   22079 cri.go:89] found id: "b6296707185d3b7e279abf2a92f9765e375112a754d9ea74965f9e1abd3911e2"
	I1016 17:46:01.953133   22079 cri.go:89] found id: "dff4028c6cade53db8c168852a237cc808da96d812dcf0a0d74a84d9fd8e1856"
	I1016 17:46:01.953137   22079 cri.go:89] found id: "9ddd87f44d89a802929d4d0b0c661f079cd91f596a5e0d6f5d10e18663962117"
	I1016 17:46:01.953141   22079 cri.go:89] found id: "11a2ed25b01f6ec1f1099f8328d958fe73b716ce518c158023e0e7704045d279"
	I1016 17:46:01.953144   22079 cri.go:89] found id: ""
	I1016 17:46:01.953181   22079 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 17:46:01.967329   22079 out.go:203] 
	W1016 17:46:01.968836   22079 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 17:46:01.968857   22079 out.go:285] * 
	W1016 17:46:01.971820   22079 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 17:46:01.973251   22079 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-431183 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-431183
helpers_test.go:243: (dbg) docker inspect addons-431183:

-- stdout --
	[
	    {
	        "Id": "895cc9c3f83025006ec3ea11bf2fd98c009ef5fe1d2b7e3e9fe3fbbc1ec18d06",
	        "Created": "2025-10-16T17:44:06.387675641Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14372,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T17:44:06.423998189Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/895cc9c3f83025006ec3ea11bf2fd98c009ef5fe1d2b7e3e9fe3fbbc1ec18d06/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/895cc9c3f83025006ec3ea11bf2fd98c009ef5fe1d2b7e3e9fe3fbbc1ec18d06/hostname",
	        "HostsPath": "/var/lib/docker/containers/895cc9c3f83025006ec3ea11bf2fd98c009ef5fe1d2b7e3e9fe3fbbc1ec18d06/hosts",
	        "LogPath": "/var/lib/docker/containers/895cc9c3f83025006ec3ea11bf2fd98c009ef5fe1d2b7e3e9fe3fbbc1ec18d06/895cc9c3f83025006ec3ea11bf2fd98c009ef5fe1d2b7e3e9fe3fbbc1ec18d06-json.log",
	        "Name": "/addons-431183",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-431183:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-431183",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "895cc9c3f83025006ec3ea11bf2fd98c009ef5fe1d2b7e3e9fe3fbbc1ec18d06",
	                "LowerDir": "/var/lib/docker/overlay2/aa169f083b306b92b8ffc6a8df14e68bdd567caa0c4222bec847e7cca2f2c769-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aa169f083b306b92b8ffc6a8df14e68bdd567caa0c4222bec847e7cca2f2c769/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aa169f083b306b92b8ffc6a8df14e68bdd567caa0c4222bec847e7cca2f2c769/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aa169f083b306b92b8ffc6a8df14e68bdd567caa0c4222bec847e7cca2f2c769/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-431183",
	                "Source": "/var/lib/docker/volumes/addons-431183/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-431183",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-431183",
	                "name.minikube.sigs.k8s.io": "addons-431183",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f8da9d1ff89a6310d47d61860ef73957be2385ff63316af8ca19c0f0c40b565",
	            "SandboxKey": "/var/run/docker/netns/8f8da9d1ff89",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-431183": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:08:bb:cf:90:a1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6e08f5c6d684788b8c0cacd5c9a403d01405022a8e87923ecb8c1b8d83c9dfa7",
	                    "EndpointID": "60a2e38c312dc4d2d88c9bbcd02814052f9c1ca403726fc3d39d5bef4a98fa9b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-431183",
	                        "895cc9c3f830"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
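The sshutil.go line in the stderr above (new ssh client on 127.0.0.1:32768) is derived from this inspect output: the cli_runner.go template resolves the published host port for 22/tcp. Run standalone, the same lookup is:
	# Extract the host port Docker published for the node's SSH (22/tcp).
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-431183
	# prints 32768, matching NetworkSettings.Ports in the dump above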
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-431183 -n addons-431183
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-431183 logs -n 25: (1.125198852s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	COMMAND | ARGS | PROFILE | USER | VERSION | START TIME | END TIME
	start   | -o=json --download-only -p download-only-101994 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker --container-runtime=crio | download-only-101994 | jenkins | v1.37.0 | 16 Oct 25 17:43 UTC | -
	delete  | --all | minikube | jenkins | v1.37.0 | 16 Oct 25 17:43 UTC | 16 Oct 25 17:43 UTC
	delete  | -p download-only-101994 | download-only-101994 | jenkins | v1.37.0 | 16 Oct 25 17:43 UTC | 16 Oct 25 17:43 UTC
	start   | -o=json --download-only -p download-only-309311 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker --container-runtime=crio | download-only-309311 | jenkins | v1.37.0 | 16 Oct 25 17:43 UTC | -
	delete  | --all | minikube | jenkins | v1.37.0 | 16 Oct 25 17:43 UTC | 16 Oct 25 17:43 UTC
	delete  | -p download-only-309311 | download-only-309311 | jenkins | v1.37.0 | 16 Oct 25 17:43 UTC | 16 Oct 25 17:43 UTC
	delete  | -p download-only-101994 | download-only-101994 | jenkins | v1.37.0 | 16 Oct 25 17:43 UTC | 16 Oct 25 17:43 UTC
	delete  | -p download-only-309311 | download-only-309311 | jenkins | v1.37.0 | 16 Oct 25 17:43 UTC | 16 Oct 25 17:43 UTC
	start   | --download-only -p download-docker-369292 --alsologtostderr --driver=docker --container-runtime=crio | download-docker-369292 | jenkins | v1.37.0 | 16 Oct 25 17:43 UTC | -
	delete  | -p download-docker-369292 | download-docker-369292 | jenkins | v1.37.0 | 16 Oct 25 17:43 UTC | 16 Oct 25 17:43 UTC
	start   | --download-only -p binary-mirror-905459 --alsologtostderr --binary-mirror http://127.0.0.1:34337 --driver=docker --container-runtime=crio | binary-mirror-905459 | jenkins | v1.37.0 | 16 Oct 25 17:43 UTC | -
	delete  | -p binary-mirror-905459 | binary-mirror-905459 | jenkins | v1.37.0 | 16 Oct 25 17:43 UTC | 16 Oct 25 17:43 UTC
	addons  | enable dashboard -p addons-431183 | addons-431183 | jenkins | v1.37.0 | 16 Oct 25 17:43 UTC | -
	addons  | disable dashboard -p addons-431183 | addons-431183 | jenkins | v1.37.0 | 16 Oct 25 17:43 UTC | -
	start   | -p addons-431183 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher | addons-431183 | jenkins | v1.37.0 | 16 Oct 25 17:43 UTC | 16 Oct 25 17:45 UTC
	addons  | addons-431183 addons disable volcano --alsologtostderr -v=1 | addons-431183 | jenkins | v1.37.0 | 16 Oct 25 17:45 UTC | -
	addons  | addons-431183 addons disable gcp-auth --alsologtostderr -v=1 | addons-431183 | jenkins | v1.37.0 | 16 Oct 25 17:46 UTC | -
	addons  | enable headlamp -p addons-431183 --alsologtostderr -v=1 | addons-431183 | jenkins | v1.37.0 | 16 Oct 25 17:46 UTC | -
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 17:43:41
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 17:43:41.770608   13712 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:43:41.770731   13712 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:43:41.770739   13712 out.go:374] Setting ErrFile to fd 2...
	I1016 17:43:41.770746   13712 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:43:41.770947   13712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:43:41.771506   13712 out.go:368] Setting JSON to false
	I1016 17:43:41.772328   13712 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1570,"bootTime":1760635052,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 17:43:41.772415   13712 start.go:141] virtualization: kvm guest
	I1016 17:43:41.774284   13712 out.go:179] * [addons-431183] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 17:43:41.775903   13712 notify.go:220] Checking for updates...
	I1016 17:43:41.775933   13712 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 17:43:41.777672   13712 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 17:43:41.779109   13712 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 17:43:41.780667   13712 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 17:43:41.782237   13712 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 17:43:41.783735   13712 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 17:43:41.785258   13712 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 17:43:41.808727   13712 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 17:43:41.808805   13712 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 17:43:41.867519   13712 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-16 17:43:41.858599982 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 17:43:41.867626   13712 docker.go:318] overlay module found
	I1016 17:43:41.869454   13712 out.go:179] * Using the docker driver based on user configuration
	I1016 17:43:41.870828   13712 start.go:305] selected driver: docker
	I1016 17:43:41.870843   13712 start.go:925] validating driver "docker" against <nil>
	I1016 17:43:41.870854   13712 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 17:43:41.871372   13712 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 17:43:41.926126   13712 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-16 17:43:41.915354408 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 17:43:41.926325   13712 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 17:43:41.926621   13712 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 17:43:41.928923   13712 out.go:179] * Using Docker driver with root privileges
	I1016 17:43:41.930221   13712 cni.go:84] Creating CNI manager for ""
	I1016 17:43:41.930287   13712 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 17:43:41.930304   13712 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1016 17:43:41.930379   13712 start.go:349] cluster config:
	{Name:addons-431183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-431183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
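The struct above is minikube's fully resolved cluster config; it is persisted to the profile's config.json a few lines below. A minimal sketch for inspecting the saved copy in readable form, assuming python3 is available on the Jenkins host (the path is taken from the log):
	python3 -m json.tool /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/config.json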
	I1016 17:43:41.931782   13712 out.go:179] * Starting "addons-431183" primary control-plane node in "addons-431183" cluster
	I1016 17:43:41.933210   13712 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 17:43:41.934674   13712 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 17:43:41.935945   13712 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 17:43:41.935988   13712 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1016 17:43:41.936001   13712 cache.go:58] Caching tarball of preloaded images
	I1016 17:43:41.936094   13712 preload.go:233] Found /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1016 17:43:41.936101   13712 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 17:43:41.936108   13712 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 17:43:41.936411   13712 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/config.json ...
	I1016 17:43:41.936440   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/config.json: {Name:mk2eceda1a8c022755b511272da50341dbc13339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:43:41.952824   13712 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 to local cache
	I1016 17:43:41.952965   13712 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory
	I1016 17:43:41.952989   13712 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local cache directory, skipping pull
	I1016 17:43:41.952996   13712 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in cache, skipping pull
	I1016 17:43:41.953006   13712 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 as a tarball
	I1016 17:43:41.953014   13712 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 from local cache
	I1016 17:43:54.602209   13712 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 from cached tarball
	I1016 17:43:54.602242   13712 cache.go:232] Successfully downloaded all kic artifacts
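Because the kic base image was loaded from the cached tarball rather than pulled, one way to confirm the daemon now holds the pinned digest is docker's --digests listing; the repository and digest are copied from the log, the check itself is only a sketch:
	docker images --digests gcr.io/k8s-minikube/kicbase-builds
	# expect a row whose DIGEST is sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225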
	I1016 17:43:54.602280   13712 start.go:360] acquireMachinesLock for addons-431183: {Name:mkc268cc7edc28cd51d10e7128f020d2864cbc75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 17:43:54.602387   13712 start.go:364] duration metric: took 87.699µs to acquireMachinesLock for "addons-431183"
	I1016 17:43:54.602410   13712 start.go:93] Provisioning new machine with config: &{Name:addons-431183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-431183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 17:43:54.602477   13712 start.go:125] createHost starting for "" (driver="docker")
	I1016 17:43:54.605014   13712 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1016 17:43:54.605215   13712 start.go:159] libmachine.API.Create for "addons-431183" (driver="docker")
	I1016 17:43:54.605247   13712 client.go:168] LocalClient.Create starting
	I1016 17:43:54.605353   13712 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem
	I1016 17:43:55.144917   13712 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem
	I1016 17:43:55.343026   13712 cli_runner.go:164] Run: docker network inspect addons-431183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1016 17:43:55.362048   13712 cli_runner.go:211] docker network inspect addons-431183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1016 17:43:55.362153   13712 network_create.go:284] running [docker network inspect addons-431183] to gather additional debugging logs...
	I1016 17:43:55.362174   13712 cli_runner.go:164] Run: docker network inspect addons-431183
	W1016 17:43:55.378481   13712 cli_runner.go:211] docker network inspect addons-431183 returned with exit code 1
	I1016 17:43:55.378505   13712 network_create.go:287] error running [docker network inspect addons-431183]: docker network inspect addons-431183: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-431183 not found
	I1016 17:43:55.378520   13712 network_create.go:289] output of [docker network inspect addons-431183]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-431183 not found
	
	** /stderr **
	I1016 17:43:55.378617   13712 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 17:43:55.396224   13712 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001508700}
	I1016 17:43:55.396271   13712 network_create.go:124] attempt to create docker network addons-431183 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1016 17:43:55.396314   13712 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-431183 addons-431183
	I1016 17:43:55.453270   13712 network_create.go:108] docker network addons-431183 192.168.49.0/24 created
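With the network in place, the subnet and gateway chosen above can be verified directly through docker's Go-template output; this is a sketch (network name, subnet, and gateway are from the log):
	docker network inspect addons-431183 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.49.0/24 192.168.49.1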
	I1016 17:43:55.453300   13712 kic.go:121] calculated static IP "192.168.49.2" for the "addons-431183" container
	I1016 17:43:55.453355   13712 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1016 17:43:55.470121   13712 cli_runner.go:164] Run: docker volume create addons-431183 --label name.minikube.sigs.k8s.io=addons-431183 --label created_by.minikube.sigs.k8s.io=true
	I1016 17:43:55.489442   13712 oci.go:103] Successfully created a docker volume addons-431183
	I1016 17:43:55.489529   13712 cli_runner.go:164] Run: docker run --rm --name addons-431183-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-431183 --entrypoint /usr/bin/test -v addons-431183:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1016 17:44:01.882338   13712 cli_runner.go:217] Completed: docker run --rm --name addons-431183-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-431183 --entrypoint /usr/bin/test -v addons-431183:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib: (6.392770221s)
	I1016 17:44:01.882373   13712 oci.go:107] Successfully prepared a docker volume addons-431183
	I1016 17:44:01.882392   13712 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 17:44:01.882411   13712 kic.go:194] Starting extracting preloaded images to volume ...
	I1016 17:44:01.882467   13712 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-431183:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1016 17:44:06.310204   13712 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-431183:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.427693666s)
	I1016 17:44:06.310238   13712 kic.go:203] duration metric: took 4.427823614s to extract preloaded images to volume ...
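The extraction step above is an ordinary tar run inside a throwaway container: the preload tarball is bind-mounted read-only, the cluster's named volume is mounted as the destination, and tar unpacks with lz4 decompression. Reformatted for readability (every path and image ref is copied from the command in the log):
	docker run --rm --entrypoint /usr/bin/tar \
	  -v /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro \
	  -v addons-431183:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 \
	  -I lz4 -xf /preloaded.tar -C /extractDir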
	W1016 17:44:06.310336   13712 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1016 17:44:06.310369   13712 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1016 17:44:06.310404   13712 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1016 17:44:06.370383   13712 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-431183 --name addons-431183 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-431183 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-431183 --network addons-431183 --ip 192.168.49.2 --volume addons-431183:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1016 17:44:06.684259   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Running}}
	I1016 17:44:06.703670   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:06.723556   13712 cli_runner.go:164] Run: docker exec addons-431183 stat /var/lib/dpkg/alternatives/iptables
	I1016 17:44:06.773748   13712 oci.go:144] the created container "addons-431183" has a running status.
	I1016 17:44:06.773779   13712 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa...
	I1016 17:44:07.012015   13712 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1016 17:44:07.043089   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:07.064648   13712 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1016 17:44:07.064666   13712 kic_runner.go:114] Args: [docker exec --privileged addons-431183 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1016 17:44:07.116507   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:07.135217   13712 machine.go:93] provisionDockerMachine start ...
	I1016 17:44:07.135324   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:07.154708   13712 main.go:141] libmachine: Using SSH client type: native
	I1016 17:44:07.155094   13712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1016 17:44:07.155116   13712 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 17:44:07.293900   13712 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-431183
	
	I1016 17:44:07.293925   13712 ubuntu.go:182] provisioning hostname "addons-431183"
	I1016 17:44:07.294016   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:07.313273   13712 main.go:141] libmachine: Using SSH client type: native
	I1016 17:44:07.313509   13712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1016 17:44:07.313526   13712 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-431183 && echo "addons-431183" | sudo tee /etc/hostname
	I1016 17:44:07.460181   13712 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-431183
	
	I1016 17:44:07.460245   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:07.477880   13712 main.go:141] libmachine: Using SSH client type: native
	I1016 17:44:07.478102   13712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1016 17:44:07.478119   13712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-431183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-431183/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-431183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 17:44:07.614372   13712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
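These SSH commands all target 127.0.0.1:32768 because the container publishes port 22 on an ephemeral host port (the --publish=127.0.0.1::22 flag in the docker run above). A sketch for resolving that port and connecting by hand, reusing the inspect template and key path from the log:
	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-431183)
	ssh -o StrictHostKeyChecking=no -p "$PORT" \
	  -i /home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa \
	  docker@127.0.0.1 hostname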
	I1016 17:44:07.614397   13712 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8849/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8849/.minikube}
	I1016 17:44:07.614429   13712 ubuntu.go:190] setting up certificates
	I1016 17:44:07.614442   13712 provision.go:84] configureAuth start
	I1016 17:44:07.614494   13712 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-431183
	I1016 17:44:07.631800   13712 provision.go:143] copyHostCerts
	I1016 17:44:07.631874   13712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem (1078 bytes)
	I1016 17:44:07.631978   13712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem (1123 bytes)
	I1016 17:44:07.632040   13712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem (1679 bytes)
	I1016 17:44:07.632092   13712 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem org=jenkins.addons-431183 san=[127.0.0.1 192.168.49.2 addons-431183 localhost minikube]
	I1016 17:44:07.801457   13712 provision.go:177] copyRemoteCerts
	I1016 17:44:07.801514   13712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 17:44:07.801547   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:07.820035   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:07.916910   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 17:44:07.936393   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1016 17:44:07.953787   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 17:44:07.971644   13712 provision.go:87] duration metric: took 357.188788ms to configureAuth
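configureAuth generated a server certificate whose SANs must cover every name the machine endpoint is reached by (the san=[...] list above). A hedged spot-check with openssl, assuming the standard text output format:
	openssl x509 -in /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem \
	  -noout -text | grep -A1 'Subject Alternative Name'
	# should list 127.0.0.1, 192.168.49.2, addons-431183, localhost and minikube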
	I1016 17:44:07.971674   13712 ubuntu.go:206] setting minikube options for container-runtime
	I1016 17:44:07.971894   13712 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:44:07.972120   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:07.989408   13712 main.go:141] libmachine: Using SSH client type: native
	I1016 17:44:07.989645   13712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1016 17:44:07.989672   13712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 17:44:08.234732   13712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 17:44:08.234756   13712 machine.go:96] duration metric: took 1.099516456s to provisionDockerMachine
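The sysconfig drop-in written just above is how minikube hands CRI-O the service CIDR as an insecure-registry range. Verifying it from the host is a one-liner sketch (profile name and file path from this run):
	out/minikube-linux-amd64 -p addons-431183 ssh -- cat /etc/sysconfig/crio.minikube
	# expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '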
	I1016 17:44:08.234772   13712 client.go:171] duration metric: took 13.629513967s to LocalClient.Create
	I1016 17:44:08.234794   13712 start.go:167] duration metric: took 13.629578272s to libmachine.API.Create "addons-431183"
	I1016 17:44:08.234806   13712 start.go:293] postStartSetup for "addons-431183" (driver="docker")
	I1016 17:44:08.234819   13712 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 17:44:08.234877   13712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 17:44:08.234910   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:08.252434   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:08.351659   13712 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 17:44:08.355376   13712 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 17:44:08.355406   13712 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 17:44:08.355423   13712 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/addons for local assets ...
	I1016 17:44:08.355480   13712 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/files for local assets ...
	I1016 17:44:08.355510   13712 start.go:296] duration metric: took 120.696481ms for postStartSetup
	I1016 17:44:08.355846   13712 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-431183
	I1016 17:44:08.373580   13712 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/config.json ...
	I1016 17:44:08.373905   13712 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 17:44:08.373956   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:08.392861   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:08.486913   13712 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 17:44:08.491333   13712 start.go:128] duration metric: took 13.888844233s to createHost
	I1016 17:44:08.491353   13712 start.go:83] releasing machines lock for "addons-431183", held for 13.888955087s
	I1016 17:44:08.491424   13712 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-431183
	I1016 17:44:08.508740   13712 ssh_runner.go:195] Run: cat /version.json
	I1016 17:44:08.508788   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:08.508795   13712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 17:44:08.508868   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:08.527887   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:08.528689   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:08.675391   13712 ssh_runner.go:195] Run: systemctl --version
	I1016 17:44:08.681574   13712 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 17:44:08.715477   13712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 17:44:08.719994   13712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 17:44:08.720059   13712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 17:44:08.746341   13712 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
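The find/mv above sidelines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the kindnet config chosen earlier drives pod networking. A quick way to see what was set aside (sketch):
	out/minikube-linux-amd64 -p addons-431183 ssh -- sudo ls -la /etc/cni/net.d
	# the disabled configs keep their content; only the .mk_disabled suffix is added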
	I1016 17:44:08.746381   13712 start.go:495] detecting cgroup driver to use...
	I1016 17:44:08.746419   13712 detect.go:190] detected "systemd" cgroup driver on host os
	I1016 17:44:08.746461   13712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 17:44:08.762274   13712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 17:44:08.774871   13712 docker.go:218] disabling cri-docker service (if available) ...
	I1016 17:44:08.774935   13712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 17:44:08.791306   13712 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 17:44:08.808595   13712 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 17:44:08.887688   13712 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 17:44:08.975071   13712 docker.go:234] disabling docker service ...
	I1016 17:44:08.975127   13712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 17:44:08.992698   13712 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 17:44:09.005339   13712 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 17:44:09.090684   13712 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 17:44:09.171316   13712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 17:44:09.183781   13712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 17:44:09.197470   13712 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 17:44:09.197611   13712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:09.208219   13712 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1016 17:44:09.208282   13712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:09.217455   13712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:09.226138   13712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:09.234625   13712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 17:44:09.242432   13712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:09.251207   13712 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:09.264728   13712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 17:44:09.273857   13712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 17:44:09.281301   13712 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1016 17:44:09.281372   13712 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1016 17:44:09.293924   13712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 17:44:09.301896   13712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 17:44:09.379492   13712 ssh_runner.go:195] Run: sudo systemctl restart crio
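The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the systemd cgroup manager, put conmon in the pod cgroup, and open unprivileged low ports via default_sysctls. After the restart, the effective values can be spot-checked; a sketch:
	out/minikube-linux-amd64 -p addons-431183 ssh -- \
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf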
	I1016 17:44:09.482009   13712 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 17:44:09.482085   13712 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 17:44:09.486056   13712 start.go:563] Will wait 60s for crictl version
	I1016 17:44:09.486108   13712 ssh_runner.go:195] Run: which crictl
	I1016 17:44:09.490123   13712 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 17:44:09.513323   13712 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
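crictl reaches CRI-O through the socket configured in /etc/crictl.yaml above; the same query can also name the endpoint explicitly, which helps when several runtimes are installed (sketch):
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	# RuntimeName/RuntimeVersion should match the cri-o 1.34.1 output above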
	I1016 17:44:09.513438   13712 ssh_runner.go:195] Run: crio --version
	I1016 17:44:09.540804   13712 ssh_runner.go:195] Run: crio --version
	I1016 17:44:09.570302   13712 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 17:44:09.571458   13712 cli_runner.go:164] Run: docker network inspect addons-431183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 17:44:09.588604   13712 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1016 17:44:09.592540   13712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 17:44:09.602431   13712 kubeadm.go:883] updating cluster {Name:addons-431183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-431183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 17:44:09.602533   13712 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 17:44:09.602571   13712 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 17:44:09.633269   13712 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 17:44:09.633289   13712 crio.go:433] Images already preloaded, skipping extraction
	I1016 17:44:09.633333   13712 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 17:44:09.659117   13712 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 17:44:09.659136   13712 cache_images.go:85] Images are preloaded, skipping loading
	I1016 17:44:09.659143   13712 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1016 17:44:09.659226   13712 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-431183 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-431183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
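The [Service] override above clears kubelet's ExecStart and substitutes minikube's flag set; it is installed as the systemd drop-in 10-kubeadm.conf (scp'd a few lines below). To see the merged unit systemd will actually run, a sketch:
	out/minikube-linux-amd64 -p addons-431183 ssh -- systemctl cat kubelet
	# prints /lib/systemd/system/kubelet.service plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf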
	I1016 17:44:09.659297   13712 ssh_runner.go:195] Run: crio config
	I1016 17:44:09.702992   13712 cni.go:84] Creating CNI manager for ""
	I1016 17:44:09.703028   13712 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 17:44:09.703050   13712 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 17:44:09.703081   13712 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-431183 NodeName:addons-431183 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 17:44:09.703225   13712 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-431183"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 17:44:09.703296   13712 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 17:44:09.711750   13712 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 17:44:09.711814   13712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 17:44:09.719629   13712 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1016 17:44:09.733530   13712 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 17:44:09.749527   13712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
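At this point the kubeadm config rendered above sits at /var/tmp/minikube/kubeadm.yaml.new (it is copied to kubeadm.yaml just before init, further below). A hedged way to sanity-check it without touching node state is kubeadm's dry-run mode, using the binary path from this run:
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run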
	I1016 17:44:09.762642   13712 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1016 17:44:09.766554   13712 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 17:44:09.776751   13712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 17:44:09.858146   13712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 17:44:09.882405   13712 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183 for IP: 192.168.49.2
	I1016 17:44:09.882430   13712 certs.go:195] generating shared ca certs ...
	I1016 17:44:09.882483   13712 certs.go:227] acquiring lock for ca certs: {Name:mkebf15a3970a66f77cf66e14f6efeaafcab5e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:09.882606   13712 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key
	I1016 17:44:10.050177   13712 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt ...
	I1016 17:44:10.050205   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt: {Name:mk92ca197d451ca11c78b9aaeedc706e4d79a17e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:10.050374   13712 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key ...
	I1016 17:44:10.050387   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key: {Name:mkfcc2d9255fa5ee2fe177136fa6ab557b1c90ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:10.050459   13712 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key
	I1016 17:44:10.302775   13712 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt ...
	I1016 17:44:10.302800   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt: {Name:mk9315cbbc6404c054735a0ebde220e418cbb122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:10.302949   13712 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key ...
	I1016 17:44:10.302959   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key: {Name:mk07d83703f861966f1139378a1238cb3c83e885 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:10.303038   13712 certs.go:257] generating profile certs ...
	I1016 17:44:10.303090   13712 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.key
	I1016 17:44:10.303103   13712 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt with IP's: []
	I1016 17:44:10.788204   13712 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt ...
	I1016 17:44:10.788230   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: {Name:mk77b1769a1b00a9f7b022011c484dd24ac8fc2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:10.788418   13712 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.key ...
	I1016 17:44:10.788434   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.key: {Name:mk86d2ac99c89452cff09866d24819974184a017 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:10.788547   13712 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.key.1bf9108a
	I1016 17:44:10.788567   13712 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.crt.1bf9108a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1016 17:44:11.379699   13712 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.crt.1bf9108a ...
	I1016 17:44:11.379732   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.crt.1bf9108a: {Name:mk83da1270ddee706b29dbd3e821b6dc7c5d1c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:11.379937   13712 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.key.1bf9108a ...
	I1016 17:44:11.379954   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.key.1bf9108a: {Name:mk01881853497ce21b9ef171c80bc0ef9a544baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:11.380054   13712 certs.go:382] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.crt.1bf9108a -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.crt
	I1016 17:44:11.380141   13712 certs.go:386] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.key.1bf9108a -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.key
	I1016 17:44:11.380189   13712 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/proxy-client.key
	I1016 17:44:11.380206   13712 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/proxy-client.crt with IP's: []
	I1016 17:44:11.541247   13712 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/proxy-client.crt ...
	I1016 17:44:11.541274   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/proxy-client.crt: {Name:mk82af69c2a54723d8ae2b40aeb6d923a717f681 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:11.541458   13712 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/proxy-client.key ...
	I1016 17:44:11.541472   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/proxy-client.key: {Name:mkbadf7274621992992f90af2262fab4e928caba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:11.541674   13712 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem (1679 bytes)
	I1016 17:44:11.541709   13712 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem (1078 bytes)
	I1016 17:44:11.541750   13712 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem (1123 bytes)
	I1016 17:44:11.541774   13712 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem (1679 bytes)
	I1016 17:44:11.542330   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 17:44:11.559982   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1016 17:44:11.577316   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 17:44:11.593993   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1016 17:44:11.611059   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1016 17:44:11.628095   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 17:44:11.645312   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 17:44:11.661881   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 17:44:11.678697   13712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 17:44:11.698627   13712 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 17:44:11.711515   13712 ssh_runner.go:195] Run: openssl version
	I1016 17:44:11.717707   13712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 17:44:11.729023   13712 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 17:44:11.732964   13712 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 17:44:11.733010   13712 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 17:44:11.766911   13712 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
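The b5213941.0 symlink name is not arbitrary: it is the OpenSSL subject-name hash of minikubeCA.pem, which is how the c_rehash-style lookup in /etc/ssl/certs locates the CA. Reproducing the hash (command copied from the log):
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941; hence /etc/ssl/certs/b5213941.0 -> minikubeCA.pem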
	I1016 17:44:11.775857   13712 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 17:44:11.779305   13712 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1016 17:44:11.779350   13712 kubeadm.go:400] StartCluster: {Name:addons-431183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-431183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 17:44:11.779430   13712 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 17:44:11.779469   13712 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 17:44:11.807360   13712 cri.go:89] found id: ""
	I1016 17:44:11.807418   13712 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 17:44:11.815412   13712 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 17:44:11.823439   13712 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1016 17:44:11.823489   13712 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 17:44:11.831528   13712 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1016 17:44:11.831549   13712 kubeadm.go:157] found existing configuration files:
	
	I1016 17:44:11.831590   13712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1016 17:44:11.839040   13712 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1016 17:44:11.839095   13712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 17:44:11.846663   13712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1016 17:44:11.854013   13712 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1016 17:44:11.854057   13712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 17:44:11.861132   13712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1016 17:44:11.868271   13712 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1016 17:44:11.868325   13712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1016 17:44:11.875406   13712 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1016 17:44:11.882610   13712 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1016 17:44:11.882658   13712 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
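	The grep/rm sequence above is minikube's stale-config cleanup: for each kubeconfig it checks whether the file still references https://control-plane.minikube.internal:8443 and removes it when the check fails (here the files simply do not exist yet). A minimal sketch of the same pattern, using only the paths and endpoint shown in the log:
	
		# sketch: drop kubeconfigs that do not reference the expected endpoint
		endpoint="https://control-plane.minikube.internal:8443"
		for f in /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
		         /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; do
		  sudo grep -q "$endpoint" "$f" || sudo rm -f "$f"
		done
	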
	I1016 17:44:11.890747   13712 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1016 17:44:11.926102   13712 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1016 17:44:11.926219   13712 kubeadm.go:318] [preflight] Running pre-flight checks
	I1016 17:44:11.945708   13712 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1016 17:44:11.945815   13712 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1016 17:44:11.945869   13712 kubeadm.go:318] OS: Linux
	I1016 17:44:11.945945   13712 kubeadm.go:318] CGROUPS_CPU: enabled
	I1016 17:44:11.946004   13712 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1016 17:44:11.946069   13712 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1016 17:44:11.946124   13712 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1016 17:44:11.946163   13712 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1016 17:44:11.946207   13712 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1016 17:44:11.946246   13712 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1016 17:44:11.946284   13712 kubeadm.go:318] CGROUPS_IO: enabled
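	The CGROUPS_* lines are kubeadm's system verification reporting which cgroup controllers are enabled. A quick manual spot-check, assuming a cgroup v2 host as on this 6.8 kernel (on cgroup v1 the controllers show up as directories instead):
	
		cat /sys/fs/cgroup/cgroup.controllers   # e.g. cpuset cpu io memory hugetlb pids
		ls /sys/fs/cgroup                       # cgroup v1 fallback: one directory per controller
	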
	I1016 17:44:12.001137   13712 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1016 17:44:12.001304   13712 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1016 17:44:12.001454   13712 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1016 17:44:12.008198   13712 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1016 17:44:12.010211   13712 out.go:252]   - Generating certificates and keys ...
	I1016 17:44:12.010316   13712 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 17:44:12.010429   13712 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 17:44:12.067438   13712 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 17:44:12.225103   13712 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 17:44:12.315893   13712 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 17:44:12.422635   13712 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 17:44:12.519153   13712 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 17:44:12.519319   13712 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-431183 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1016 17:44:12.688833   13712 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 17:44:12.689042   13712 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-431183 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1016 17:44:12.793210   13712 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 17:44:12.997659   13712 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 17:44:13.489931   13712 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 17:44:13.490015   13712 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 17:44:13.682059   13712 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 17:44:13.781974   13712 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 17:44:13.836506   13712 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 17:44:14.030302   13712 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 17:44:14.269068   13712 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 17:44:14.269513   13712 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 17:44:14.274381   13712 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 17:44:14.275878   13712 out.go:252]   - Booting up control plane ...
	I1016 17:44:14.276012   13712 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 17:44:14.276116   13712 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 17:44:14.276794   13712 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 17:44:14.290258   13712 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 17:44:14.290348   13712 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 17:44:14.297285   13712 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 17:44:14.297424   13712 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 17:44:14.297483   13712 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 17:44:14.398077   13712 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 17:44:14.398224   13712 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 17:44:14.899648   13712 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.723117ms
	I1016 17:44:14.903557   13712 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 17:44:14.903683   13712 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1016 17:44:14.903843   13712 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 17:44:14.903971   13712 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 17:44:16.276508   13712 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.372870402s
	I1016 17:44:17.239387   13712 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.335846107s
	I1016 17:44:18.905124   13712 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001486271s
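	The three control-plane-check probes poll plain HTTPS health endpoints, so a stalled component can be probed by hand with the same URLs from the log (-k because the serving certificates are not in the host trust store):
	
		curl -sk https://192.168.49.2:8443/livez    # kube-apiserver
		curl -sk https://127.0.0.1:10257/healthz    # kube-controller-manager
		curl -sk https://127.0.0.1:10259/livez      # kube-scheduler
	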
	I1016 17:44:18.915901   13712 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 17:44:18.926842   13712 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 17:44:18.938158   13712 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 17:44:18.938422   13712 kubeadm.go:318] [mark-control-plane] Marking the node addons-431183 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 17:44:18.945911   13712 kubeadm.go:318] [bootstrap-token] Using token: s8h074.a5lym059it9fzll8
	I1016 17:44:18.947599   13712 out.go:252]   - Configuring RBAC rules ...
	I1016 17:44:18.947781   13712 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 17:44:18.950603   13712 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 17:44:18.955766   13712 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 17:44:18.958328   13712 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 17:44:18.961586   13712 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 17:44:18.964156   13712 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 17:44:19.310856   13712 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 17:44:19.726616   13712 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 17:44:20.311237   13712 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 17:44:20.312037   13712 kubeadm.go:318] 
	I1016 17:44:20.312132   13712 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 17:44:20.312147   13712 kubeadm.go:318] 
	I1016 17:44:20.312211   13712 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 17:44:20.312242   13712 kubeadm.go:318] 
	I1016 17:44:20.312290   13712 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 17:44:20.312376   13712 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 17:44:20.312480   13712 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 17:44:20.312497   13712 kubeadm.go:318] 
	I1016 17:44:20.312580   13712 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 17:44:20.312589   13712 kubeadm.go:318] 
	I1016 17:44:20.312657   13712 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 17:44:20.312666   13712 kubeadm.go:318] 
	I1016 17:44:20.312757   13712 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 17:44:20.312859   13712 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 17:44:20.312952   13712 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 17:44:20.312961   13712 kubeadm.go:318] 
	I1016 17:44:20.313057   13712 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 17:44:20.313163   13712 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 17:44:20.313178   13712 kubeadm.go:318] 
	I1016 17:44:20.313289   13712 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token s8h074.a5lym059it9fzll8 \
	I1016 17:44:20.313415   13712 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c \
	I1016 17:44:20.313449   13712 kubeadm.go:318] 	--control-plane 
	I1016 17:44:20.313457   13712 kubeadm.go:318] 
	I1016 17:44:20.313562   13712 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 17:44:20.313570   13712 kubeadm.go:318] 
	I1016 17:44:20.313680   13712 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token s8h074.a5lym059it9fzll8 \
	I1016 17:44:20.313834   13712 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c 
	I1016 17:44:20.315959   13712 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1016 17:44:20.316124   13712 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
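	Both warnings appear benign for this run: the SystemVerification one means the verifier could not load the configs module to read the kernel config, and the kubelet one is expected because minikube manages the kubelet itself rather than enabling the systemd unit. If the kernel-config warning needed chasing, a first check would be whether either of the usual kernel-config sources is present (an assumption about standard locations, not taken from this log):
	
		ls -l /proc/config.gz /boot/config-"$(uname -r)" 2>/dev/null
	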
	I1016 17:44:20.316162   13712 cni.go:84] Creating CNI manager for ""
	I1016 17:44:20.316177   13712 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 17:44:20.317927   13712 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 17:44:20.319604   13712 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 17:44:20.323893   13712 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 17:44:20.323912   13712 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 17:44:20.337400   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
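	With the docker driver and the crio runtime, minikube recommends kindnet and applies its manifest above. A hedged post-check that the CNI actually came up; the daemonset name kindnet is minikube's convention and is assumed here, not shown in this log:
	
		kubectl -n kube-system rollout status ds/kindnet --timeout=60s
		ls /opt/cni/bin    # portmap (stat'd above), bridge, etc.
	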
	I1016 17:44:20.545748   13712 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 17:44:20.545820   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:20.545832   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-431183 minikube.k8s.io/updated_at=2025_10_16T17_44_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=addons-431183 minikube.k8s.io/primary=true
	I1016 17:44:20.558072   13712 ops.go:34] apiserver oom_adj: -16
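	The oom_adj read above confirms the apiserver is shielded from the OOM killer (-16 on the legacy -17..15 scale). Equivalent manual checks, assuming a single kube-apiserver process; oom_score_adj is the modern interface on its -1000..1000 scale:
	
		cat /proc/"$(pgrep kube-apiserver)"/oom_adj
		cat /proc/"$(pgrep kube-apiserver)"/oom_score_adj
	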
	I1016 17:44:20.634912   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:21.135411   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:21.635999   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:22.135021   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:22.635829   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:23.135264   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:23.635279   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:24.135278   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:24.635299   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:25.135855   13712 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 17:44:25.202737   13712 kubeadm.go:1113] duration metric: took 4.656956624s to wait for elevateKubeSystemPrivileges
	I1016 17:44:25.202775   13712 kubeadm.go:402] duration metric: took 13.423428318s to StartCluster
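	The repeated "kubectl get sa default" runs above are a poll loop: bring-up is considered settled once the default ServiceAccount exists, which took about 4.7s here. A minimal sketch of the same wait, with the binary and kubeconfig paths taken from the log:
	
		until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
		    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
		  sleep 0.5
		done
	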
	I1016 17:44:25.202793   13712 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:25.202893   13712 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 17:44:25.203356   13712 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 17:44:25.203565   13712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 17:44:25.203581   13712 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 17:44:25.203667   13712 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1016 17:44:25.203796   13712 addons.go:69] Setting yakd=true in profile "addons-431183"
	I1016 17:44:25.203819   13712 addons.go:238] Setting addon yakd=true in "addons-431183"
	I1016 17:44:25.203819   13712 addons.go:69] Setting default-storageclass=true in profile "addons-431183"
	I1016 17:44:25.203845   13712 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:44:25.203862   13712 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-431183"
	I1016 17:44:25.203878   13712 addons.go:69] Setting ingress=true in profile "addons-431183"
	I1016 17:44:25.203883   13712 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-431183"
	I1016 17:44:25.203850   13712 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-431183"
	I1016 17:44:25.203899   13712 addons.go:69] Setting ingress-dns=true in profile "addons-431183"
	I1016 17:44:25.203909   13712 addons.go:238] Setting addon ingress-dns=true in "addons-431183"
	I1016 17:44:25.203918   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.203931   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.203931   13712 addons.go:69] Setting registry-creds=true in profile "addons-431183"
	I1016 17:44:25.203954   13712 addons.go:238] Setting addon registry-creds=true in "addons-431183"
	I1016 17:44:25.203990   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.204073   13712 addons.go:69] Setting metrics-server=true in profile "addons-431183"
	I1016 17:44:25.204080   13712 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-431183"
	I1016 17:44:25.204098   13712 addons.go:238] Setting addon metrics-server=true in "addons-431183"
	I1016 17:44:25.204105   13712 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-431183"
	I1016 17:44:25.204127   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.204247   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.204398   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.204403   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.204424   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.204541   13712 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-431183"
	I1016 17:44:25.204559   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.204656   13712 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-431183"
	I1016 17:44:25.204682   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.205132   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.205192   13712 addons.go:69] Setting registry=true in profile "addons-431183"
	I1016 17:44:25.205460   13712 addons.go:238] Setting addon registry=true in "addons-431183"
	I1016 17:44:25.205489   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.205962   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.206245   13712 addons.go:69] Setting inspektor-gadget=true in profile "addons-431183"
	I1016 17:44:25.206279   13712 addons.go:238] Setting addon inspektor-gadget=true in "addons-431183"
	I1016 17:44:25.206314   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.206667   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.203853   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.209761   13712 addons.go:69] Setting volcano=true in profile "addons-431183"
	I1016 17:44:25.209819   13712 addons.go:238] Setting addon volcano=true in "addons-431183"
	I1016 17:44:25.209889   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.210044   13712 addons.go:69] Setting volumesnapshots=true in profile "addons-431183"
	I1016 17:44:25.210074   13712 addons.go:238] Setting addon volumesnapshots=true in "addons-431183"
	I1016 17:44:25.210109   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.210391   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.210551   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.210587   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.210666   13712 out.go:179] * Verifying Kubernetes components...
	I1016 17:44:25.203869   13712 addons.go:69] Setting gcp-auth=true in profile "addons-431183"
	I1016 17:44:25.211151   13712 mustload.go:65] Loading cluster: addons-431183
	I1016 17:44:25.211330   13712 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:44:25.211560   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.213786   13712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 17:44:25.213952   13712 addons.go:69] Setting cloud-spanner=true in profile "addons-431183"
	I1016 17:44:25.213970   13712 addons.go:238] Setting addon cloud-spanner=true in "addons-431183"
	I1016 17:44:25.214001   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.214117   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.214176   13712 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-431183"
	I1016 17:44:25.214266   13712 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-431183"
	I1016 17:44:25.205408   13712 addons.go:69] Setting storage-provisioner=true in profile "addons-431183"
	I1016 17:44:25.214303   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.214320   13712 addons.go:238] Setting addon storage-provisioner=true in "addons-431183"
	I1016 17:44:25.214358   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.203891   13712 addons.go:238] Setting addon ingress=true in "addons-431183"
	I1016 17:44:25.216917   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.217425   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.219417   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.219789   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.219938   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.256764   13712 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1016 17:44:25.258073   13712 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1016 17:44:25.258094   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1016 17:44:25.258167   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
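	The docker container inspect -f template above pulls the host port mapped to the container's 22/tcp so minikube can open SSH sessions (the port later appears as 32768 in the "new ssh client" lines). A simpler manual equivalent:
	
		docker port addons-431183 22/tcp    # prints e.g. 0.0.0.0:32768
	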
	I1016 17:44:25.267563   13712 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1016 17:44:25.273781   13712 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1016 17:44:25.274239   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1016 17:44:25.275103   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.275355   13712 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1016 17:44:25.280901   13712 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1016 17:44:25.280927   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1016 17:44:25.280987   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.284099   13712 out.go:179]   - Using image docker.io/registry:3.0.0
	I1016 17:44:25.285533   13712 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1016 17:44:25.287000   13712 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1016 17:44:25.287027   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1016 17:44:25.287084   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.289888   13712 addons.go:238] Setting addon default-storageclass=true in "addons-431183"
	I1016 17:44:25.290627   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.292851   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:25.294732   13712 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1016 17:44:25.296068   13712 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1016 17:44:25.296087   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1016 17:44:25.296140   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.301185   13712 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-431183"
	I1016 17:44:25.301244   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.301726   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	W1016 17:44:25.312040   13712 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1016 17:44:25.317397   13712 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1016 17:44:25.319607   13712 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1016 17:44:25.319661   13712 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1016 17:44:25.319737   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.330100   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:25.333361   13712 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1016 17:44:25.334796   13712 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1016 17:44:25.338227   13712 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1016 17:44:25.339887   13712 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1016 17:44:25.339904   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1016 17:44:25.339962   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.346060   13712 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1016 17:44:25.346426   13712 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1016 17:44:25.347532   13712 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1016 17:44:25.349959   13712 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1016 17:44:25.350033   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.352513   13712 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1016 17:44:25.352517   13712 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 17:44:25.357686   13712 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 17:44:25.357726   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 17:44:25.357790   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.360254   13712 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1016 17:44:25.361773   13712 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1016 17:44:25.363748   13712 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1016 17:44:25.364410   13712 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1016 17:44:25.366819   13712 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1016 17:44:25.368023   13712 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1016 17:44:25.369583   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1016 17:44:25.369672   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.370929   13712 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1016 17:44:25.372617   13712 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1016 17:44:25.374318   13712 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1016 17:44:25.374341   13712 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1016 17:44:25.374401   13712 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1016 17:44:25.374420   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.378465   13712 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1016 17:44:25.378572   13712 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1016 17:44:25.378655   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.385225   13712 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 17:44:25.385248   13712 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 17:44:25.385302   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.385438   13712 out.go:179]   - Using image docker.io/busybox:stable
	I1016 17:44:25.389383   13712 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1016 17:44:25.389991   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.391367   13712 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1016 17:44:25.391391   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1016 17:44:25.391455   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.392182   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.394667   13712 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1016 17:44:25.397118   13712 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1016 17:44:25.397146   13712 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1016 17:44:25.397210   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:25.399846   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.406025   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.408196   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.409502   13712 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
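	This pipeline rewrites the CoreDNS Corefile in place: sed injects a hosts block mapping host.minikube.internal to 192.168.49.1 and enables query logging, then pipes the result back through kubectl replace (confirmed by the "host record injected" line further down). To verify the edit landed, with the expected fragment shown as comments:
	
		sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
		    -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
		# hosts {
		#    192.168.49.1 host.minikube.internal
		#    fallthrough
		# }
	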
	I1016 17:44:25.410007   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.437617   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.439834   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.439937   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.441076   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.452923   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.455587   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.456553   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.457333   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.473832   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:25.509280   13712 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 17:44:25.597402   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1016 17:44:25.608167   13712 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1016 17:44:25.608193   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1016 17:44:25.614207   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1016 17:44:25.614507   13712 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1016 17:44:25.614526   13712 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1016 17:44:25.633492   13712 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1016 17:44:25.633521   13712 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1016 17:44:25.643482   13712 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1016 17:44:25.643512   13712 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1016 17:44:25.653213   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1016 17:44:25.658151   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1016 17:44:25.661783   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 17:44:25.672536   13712 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1016 17:44:25.672564   13712 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1016 17:44:25.674981   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1016 17:44:25.675005   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 17:44:25.674986   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1016 17:44:25.676493   13712 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:25.676512   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1016 17:44:25.678587   13712 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1016 17:44:25.678605   13712 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1016 17:44:25.679729   13712 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1016 17:44:25.679756   13712 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1016 17:44:25.680641   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1016 17:44:25.681663   13712 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1016 17:44:25.681680   13712 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1016 17:44:25.683573   13712 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1016 17:44:25.683594   13712 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1016 17:44:25.717266   13712 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1016 17:44:25.717301   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1016 17:44:25.724422   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1016 17:44:25.728882   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:25.733209   13712 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1016 17:44:25.733234   13712 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1016 17:44:25.735015   13712 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1016 17:44:25.735043   13712 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1016 17:44:25.735456   13712 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1016 17:44:25.735474   13712 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1016 17:44:25.763247   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1016 17:44:25.784981   13712 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1016 17:44:25.785008   13712 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1016 17:44:25.786123   13712 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1016 17:44:25.786175   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1016 17:44:25.809104   13712 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1016 17:44:25.809206   13712 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1016 17:44:25.849160   13712 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1016 17:44:25.849219   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1016 17:44:25.860886   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1016 17:44:25.884159   13712 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1016 17:44:25.884252   13712 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1016 17:44:25.888351   13712 node_ready.go:35] waiting up to 6m0s for node "addons-431183" to be "Ready" ...
	I1016 17:44:25.888617   13712 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1016 17:44:25.908248   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1016 17:44:25.926193   13712 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1016 17:44:25.926306   13712 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1016 17:44:26.018493   13712 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1016 17:44:26.018580   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1016 17:44:26.058191   13712 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1016 17:44:26.058220   13712 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1016 17:44:26.114364   13712 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1016 17:44:26.114387   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1016 17:44:26.152515   13712 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1016 17:44:26.152537   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1016 17:44:26.192501   13712 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1016 17:44:26.192526   13712 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1016 17:44:26.230106   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1016 17:44:26.394118   13712 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-431183" context rescaled to 1 replicas
	I1016 17:44:26.849228   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.174172857s)
	I1016 17:44:26.849266   13712 addons.go:479] Verifying addon ingress=true in "addons-431183"
	I1016 17:44:26.849293   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.1742841s)
	I1016 17:44:26.849646   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.174620266s)
	I1016 17:44:26.849703   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.169006801s)
	I1016 17:44:26.849849   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.125397728s)
	I1016 17:44:26.849872   13712 addons.go:479] Verifying addon metrics-server=true in "addons-431183"
	I1016 17:44:26.849964   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.121060729s)
	W1016 17:44:26.849992   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:26.850009   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.086732275s)
	I1016 17:44:26.850020   13712 retry.go:31] will retry after 304.232314ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
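	The validation error is consistent with the transfer logged earlier: ig-crd.yaml was copied over as only 14 bytes (the "scp inspektor-gadget/ig-crd.yaml ... (14 bytes)" line), far too small to hold a CRD, so the apiVersion and kind fields every manifest requires are missing and kubectl rejects the file. A quick confirmation on the node, using the path from the log:
	
		sudo wc -c /etc/kubernetes/addons/ig-crd.yaml    # 14 bytes => truncated manifest
		sudo head /etc/kubernetes/addons/ig-crd.yaml
	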
	I1016 17:44:26.850023   13712 addons.go:479] Verifying addon registry=true in "addons-431183"
	I1016 17:44:26.850857   13712 out.go:179] * Verifying ingress addon...
	I1016 17:44:26.851811   13712 out.go:179] * Verifying registry addon...
	I1016 17:44:26.854587   13712 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1016 17:44:26.855384   13712 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1016 17:44:26.859755   13712 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1016 17:44:26.859849   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:26.860174   13712 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1016 17:44:26.860379   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:27.154590   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:27.290879   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.382591673s)
	I1016 17:44:27.291039   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.430106882s)
	W1016 17:44:27.291086   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1016 17:44:27.291110   13712 retry.go:31] will retry after 241.886996ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
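
Unlike the ig-crd.yaml failure, this one is an ordering race rather than a broken manifest: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same apply batch that installs the snapshot.storage.k8s.io CRDs, and API discovery has not registered the new kind yet, hence "no matches for kind". The retry at 17:44:27.533 below does succeed once the CRDs are established. A two-phase apply avoids the race entirely; a sketch using the file names from the command above:

    # Phase 1: install the CRD and wait until the API server marks it
    # Established, i.e. the new kind is servable.
    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait crd/volumesnapshotclasses.snapshot.storage.k8s.io \
      --for=condition=Established --timeout=60s
    # Phase 2: only now apply custom resources of that kind.
    kubectl apply -f csi-hostpath-snapshotclass.yaml
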
	I1016 17:44:27.291141   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.060977615s)
	I1016 17:44:27.291188   13712 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-431183"
	I1016 17:44:27.292464   13712 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-431183 service yakd-dashboard -n yakd-dashboard
	
	I1016 17:44:27.293448   13712 out.go:179] * Verifying csi-hostpath-driver addon...
	I1016 17:44:27.296094   13712 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1016 17:44:27.303793   13712 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1016 17:44:27.303819   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:27.406789   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:27.406919   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:27.533214   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1016 17:44:27.769568   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:27.769602   13712 retry.go:31] will retry after 340.975823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:27.799003   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:27.857654   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:27.857952   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:27.891355   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
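
The node_ready warnings repeated through this section are the same style of poll at node scope: until the kubelet reports Ready, none of the addon pods above can be scheduled and they all stay Pending. An equivalent gate, with the node name from the log:

    # Wait for the node itself before expecting addon pods to leave Pending.
    kubectl wait node/addons-431183 --for=condition=Ready --timeout=5m
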
	I1016 17:44:28.111295   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:28.299454   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:28.399843   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:28.399985   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:28.799683   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:28.858153   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:28.858370   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:29.299604   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:29.357896   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:29.358037   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:29.799369   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:29.900557   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:29.900816   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:30.029628   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.496362298s)
	I1016 17:44:30.029696   13712 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.918349698s)
	W1016 17:44:30.029747   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:30.029768   13712 retry.go:31] will retry after 642.750032ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:30.299581   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 17:44:30.391201   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:30.399745   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:30.399837   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:30.672862   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:30.799815   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:30.857867   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:30.858170   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:31.199645   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:31.199676   13712 retry.go:31] will retry after 672.617502ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:31.299871   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:31.400933   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:31.401038   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:31.799367   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:31.857860   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:31.857980   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:31.872993   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:32.300104   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1016 17:44:32.391608   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:32.400945   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:32.401162   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1016 17:44:32.417530   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:32.417561   13712 retry.go:31] will retry after 1.622996807s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
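
The delays retry.go logs for the ig-crd.yaml apply (241ms, 340ms, 642ms, 672ms, 1.6s, and growing below) trace a jittered, roughly exponential backoff. A shell sketch of that pattern around the same command; the multiplier, jitter, and attempt cap are illustrative, not minikube's exact parameters:

    # Re-run the failing apply with growing, jittered delays; stop on success
    # or after ten attempts.
    delay=0.25
    for attempt in 1 2 3 4 5 6 7 8 9 10; do
      kubectl apply --force -f ig-crd.yaml -f ig-deployment.yaml && break
      sleep "$delay"
      delay=$(awk -v d="$delay" 'BEGIN { srand(); printf "%.2f", d * (1.3 + 0.7 * rand()) }')
    done

Because the manifest error is deterministic, no amount of backoff can succeed here, which the rest of the log bears out.
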
	I1016 17:44:32.799595   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:32.857504   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:32.857901   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:32.938611   13712 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1016 17:44:32.938671   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:32.957192   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:44:33.061768   13712 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1016 17:44:33.074835   13712 addons.go:238] Setting addon gcp-auth=true in "addons-431183"
	I1016 17:44:33.074891   13712 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:44:33.075283   13712 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:44:33.092937   13712 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1016 17:44:33.092979   13712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:44:33.111570   13712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
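
The gcp-auth step copies credentials into the node container over SSH, and the host port for the container's sshd comes from Docker's port map, as the inspect call above shows. The same lookup by hand, reusing the format string from the log (the key path is abbreviated to a default minikube home; the log shows the full Jenkins path):

    # Resolve which host port maps to 22/tcp on the node container, then
    # connect as the "docker" user with the profile's key.
    PORT=$(docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-431183)
    ssh -p "$PORT" -i ~/.minikube/machines/addons-431183/id_rsa docker@127.0.0.1
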
	I1016 17:44:33.208878   13712 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1016 17:44:33.210665   13712 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1016 17:44:33.212357   13712 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1016 17:44:33.212379   13712 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1016 17:44:33.225763   13712 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1016 17:44:33.225793   13712 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1016 17:44:33.238601   13712 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1016 17:44:33.238620   13712 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1016 17:44:33.251598   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1016 17:44:33.299793   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:33.357896   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:33.358516   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:33.568984   13712 addons.go:479] Verifying addon gcp-auth=true in "addons-431183"
	I1016 17:44:33.571329   13712 out.go:179] * Verifying gcp-auth addon...
	I1016 17:44:33.573693   13712 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1016 17:44:33.576324   13712 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1016 17:44:33.576339   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:33.799034   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:33.857542   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:33.857745   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:34.041326   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:34.076266   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:34.299452   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:34.357506   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:34.358299   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:34.391838   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	W1016 17:44:34.570099   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:34.570137   13712 retry.go:31] will retry after 2.042622617s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:34.577131   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:34.798686   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:34.857346   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:34.857907   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:35.076931   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:35.298681   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:35.358036   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:35.358271   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:35.576424   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:35.799076   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:35.857592   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:35.858381   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:36.076664   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:36.299307   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:36.357779   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:36.357975   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:36.577585   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:36.613791   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:36.799325   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:36.857777   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:36.857844   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1016 17:44:36.891648   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:37.076605   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1016 17:44:37.147533   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:37.147568   13712 retry.go:31] will retry after 3.288066533s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:37.299214   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:37.358025   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:37.358190   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:37.576913   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:37.799411   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:37.858024   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:37.858286   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:38.076522   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:38.299032   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:38.357932   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:38.358456   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:38.576609   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:38.799474   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:38.858031   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:38.858171   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:39.077328   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:39.298983   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:39.357425   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:39.358404   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:39.391026   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:39.576626   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:39.799205   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:39.857968   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:39.858146   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:40.077057   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:40.299686   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:40.357308   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:40.358133   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:40.436741   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:40.577897   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:40.798895   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:40.857563   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:40.858205   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:40.967760   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:40.967789   13712 retry.go:31] will retry after 5.688643093s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:41.076216   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:41.298678   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:41.357341   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:41.357930   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:41.391446   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:41.577307   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:41.798930   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:41.857876   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:41.858255   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:42.076367   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:42.299068   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:42.357745   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:42.357844   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:42.577398   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:42.799786   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:42.857657   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:42.857933   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:43.076982   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:43.299842   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:43.357628   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:43.358381   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:43.391798   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:43.576657   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:43.799228   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:43.857762   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:43.857991   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:44.076786   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:44.299404   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:44.357922   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:44.357977   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:44.577554   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:44.799151   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:44.857637   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:44.858224   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:45.077362   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:45.299134   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:45.357829   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:45.358028   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:45.577100   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:45.799816   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:45.857901   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:45.858510   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:45.890877   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:46.076476   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:46.299155   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:46.357693   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:46.357817   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:46.576482   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:46.656552   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:46.799313   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:46.857756   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:46.858010   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:47.076839   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1016 17:44:47.183257   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:47.183282   13712 retry.go:31] will retry after 4.644458726s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:47.298815   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:47.357656   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:47.357862   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:47.576876   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:47.799742   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:47.857524   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:47.858011   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:47.891570   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:48.077089   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:48.298669   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:48.358183   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:48.358443   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:48.577285   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:48.799448   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:48.857899   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:48.858030   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:49.077745   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:49.299641   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:49.357178   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:49.357645   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:49.577479   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:49.799179   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:49.857918   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:49.858143   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:50.077140   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:50.299196   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:50.357899   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:50.357911   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1016 17:44:50.391414   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:50.577316   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:50.798849   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:50.857515   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:50.857752   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:51.076745   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:51.299571   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:51.358122   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:51.358227   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:51.577314   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:51.799142   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:51.828341   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:44:51.857320   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:51.858023   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:52.077063   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:52.298303   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:52.359096   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:52.359327   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:52.363382   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:44:52.363461   13712 retry.go:31] will retry after 13.305923226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1016 17:44:52.392011   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:52.576679   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:52.799555   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:52.858158   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:52.858319   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:53.077119   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:53.298750   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:53.357011   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:53.357764   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:53.577188   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:53.798795   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:53.857402   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:53.857750   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:54.077213   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:54.299084   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:54.357888   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:54.358487   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:54.576628   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:54.799326   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:54.857772   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:54.857864   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1016 17:44:54.891258   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:55.076878   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:55.298748   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:55.357252   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:55.357698   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:55.576926   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:55.799334   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:55.857768   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:55.857840   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:56.077031   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:56.299848   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:56.357449   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:56.357913   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:56.577150   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:56.799127   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:56.857656   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:56.858133   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:44:56.891440   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:57.077171   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:57.298669   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:57.358145   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:57.358345   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:57.577219   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:57.798744   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:57.857186   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:57.857932   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:58.078134   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:58.298706   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:58.357392   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:58.357971   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:58.577244   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:58.798742   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:58.857138   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:44:58.857843   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:59.076847   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:59.299372   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:59.357836   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:59.358039   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1016 17:44:59.391312   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:44:59.576782   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:44:59.799383   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:44:59.858017   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:44:59.858079   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:00.077055   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:00.299167   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:00.357616   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:00.357842   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:00.576839   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:00.799632   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:00.857994   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:00.858040   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:01.076943   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:01.299594   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:01.358038   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:01.358103   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1016 17:45:01.391513   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:45:01.577126   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:01.799265   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:01.857656   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:01.857842   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:02.076940   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:02.299771   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:02.357383   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:02.357972   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:02.577328   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:02.799270   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:02.857865   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:02.858009   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:03.077360   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:03.299129   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:03.357925   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:03.358243   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:45:03.392063   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:45:03.576638   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:03.799112   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:03.857639   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:03.857768   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:04.077237   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:04.298788   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:04.357327   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:04.357931   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:04.576949   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:04.799548   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:04.858006   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:04.858133   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:05.077311   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:05.299172   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:05.357687   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:05.357917   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:05.577018   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:05.670252   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:45:05.799425   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:05.858234   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:05.858476   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1016 17:45:05.891917   13712 node_ready.go:57] node "addons-431183" has "Ready":"False" status (will retry)
	I1016 17:45:06.076988   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1016 17:45:06.209029   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:06.209061   13712 retry.go:31] will retry after 13.152751955s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
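
The validation failure above is mechanical: `kubectl apply` rejects any YAML document whose top level lacks `apiVersion` and `kind`, so at least one document inside ig-crd.yaml is missing those two keys. Below is a minimal standalone checker for that condition; it assumes `gopkg.in/yaml.v3` and a manifest path passed as the first argument, neither of which comes from minikube itself.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// e.g. go run . /etc/kubernetes/addons/ig-crd.yaml
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for i := 1; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// kubectl's validator reports exactly these two conditions.
		if doc["apiVersion"] == nil {
			fmt.Printf("document %d: apiVersion not set\n", i)
		}
		if doc["kind"] == nil {
			fmt.Printf("document %d: kind not set\n", i)
		}
	}
}
```

Passing `--validate=false`, as the error message suggests, would only silence the client-side check rather than fix the malformed document.
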
	I1016 17:45:06.299590   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:06.358201   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:06.358244   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:06.578759   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:06.799289   13712 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1016 17:45:06.799309   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:06.858099   13712 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1016 17:45:06.858124   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:06.858289   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
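
The kapi.go:86/96 pairs are a poll: list the pods matching a label selector, then wait for each to leave Pending. A rough client-go equivalent of one such list, using the kubeconfig path shown elsewhere in this log and the kube-system namespace in which the registry pods appear below; this is an illustration, not minikube's kapi.go code.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same selector the log polls for the registry addon.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=registry"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Found %d Pods for label selector\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}
```
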
	I1016 17:45:06.892382   13712 node_ready.go:49] node "addons-431183" is "Ready"
	I1016 17:45:06.892414   13712 node_ready.go:38] duration metric: took 41.004036419s for node "addons-431183" to be "Ready" ...
	I1016 17:45:06.892429   13712 api_server.go:52] waiting for apiserver process to appear ...
	I1016 17:45:06.892485   13712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 17:45:06.913453   13712 api_server.go:72] duration metric: took 41.709838831s to wait for apiserver process to appear ...
	I1016 17:45:06.913483   13712 api_server.go:88] waiting for apiserver healthz status ...
	I1016 17:45:06.913504   13712 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1016 17:45:06.918675   13712 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1016 17:45:06.920200   13712 api_server.go:141] control plane version: v1.34.1
	I1016 17:45:06.920316   13712 api_server.go:131] duration metric: took 6.824153ms to wait for apiserver health ...
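
The healthz probe is a single HTTPS GET that expects status 200 with the literal body `ok`. A standalone sketch against the endpoint from this run follows; skipping certificate verification is a shortcut for the sketch only, since minikube's own client trusts the cluster CA.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Assumption for the sketch: trust is skipped here, whereas the
		// real check presents the cluster's CA certificate.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
```
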
	I1016 17:45:06.920339   13712 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 17:45:06.963739   13712 system_pods.go:59] 20 kube-system pods found
	I1016 17:45:06.963784   13712 system_pods.go:61] "amd-gpu-device-plugin-6bmbl" [92edcbbf-d797-4999-8ce6-d9bd732cc23e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1016 17:45:06.963796   13712 system_pods.go:61] "coredns-66bc5c9577-75dtc" [78c8df84-91a0-4258-99dc-3cb63420358f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 17:45:06.963806   13712 system_pods.go:61] "csi-hostpath-attacher-0" [1cd92c52-4deb-4b96-8e95-d000dd51d895] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 17:45:06.963814   13712 system_pods.go:61] "csi-hostpath-resizer-0" [5a7f2e9a-0e16-4f9a-89da-404ff25e4115] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 17:45:06.963822   13712 system_pods.go:61] "csi-hostpathplugin-lwfnt" [d0e19e01-0ca5-4a49-9f8e-3cd3438fed4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 17:45:06.963828   13712 system_pods.go:61] "etcd-addons-431183" [dacbf6c0-3773-4f4e-a814-ed8813ec5a42] Running
	I1016 17:45:06.963835   13712 system_pods.go:61] "kindnet-xm247" [3a190cf7-af44-4a35-8cea-1a4e799fab68] Running
	I1016 17:45:06.963841   13712 system_pods.go:61] "kube-apiserver-addons-431183" [e968414a-90f6-452b-bc3f-2e8e1999b8e4] Running
	I1016 17:45:06.963846   13712 system_pods.go:61] "kube-controller-manager-addons-431183" [ec5d667f-8b35-4c84-a475-78cf546a78a0] Running
	I1016 17:45:06.963854   13712 system_pods.go:61] "kube-ingress-dns-minikube" [b40908b0-a37c-4873-b577-02403cfebda1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1016 17:45:06.963860   13712 system_pods.go:61] "kube-proxy-kxgwk" [1757da5d-0d02-4508-847f-d04b458e7497] Running
	I1016 17:45:06.963865   13712 system_pods.go:61] "kube-scheduler-addons-431183" [67d05e32-dc46-40a7-8aeb-1a581cfc7dfd] Running
	I1016 17:45:06.963872   13712 system_pods.go:61] "metrics-server-85b7d694d7-m2l65" [37717fb0-1759-4af3-aa42-feadddd69063] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 17:45:06.963895   13712 system_pods.go:61] "nvidia-device-plugin-daemonset-kcsqr" [895271a9-cb66-441d-924c-5aab58267f88] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1016 17:45:06.963908   13712 system_pods.go:61] "registry-6b586f9694-4gxbm" [760d1bfa-750e-4a66-92c9-6f7903ad398c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 17:45:06.963917   13712 system_pods.go:61] "registry-creds-764b6fb674-4sqn6" [ff6144d2-13c8-475e-b307-4f201354f1d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 17:45:06.963925   13712 system_pods.go:61] "registry-proxy-r2qlf" [d8893400-4bc4-4eea-9742-a241e52d31e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1016 17:45:06.963935   13712 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d7fm5" [c4e22bc5-8ea4-423f-93bb-6b31c1ffb3b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:06.963945   13712 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tbv8w" [74771ef4-79f1-4980-9a86-e516fbb4e571] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:06.963952   13712 system_pods.go:61] "storage-provisioner" [cf381c97-b27b-46f1-b287-85542c5625d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 17:45:06.963960   13712 system_pods.go:74] duration metric: took 43.572634ms to wait for pod list to return data ...
	I1016 17:45:06.963971   13712 default_sa.go:34] waiting for default service account to be created ...
	I1016 17:45:06.966910   13712 default_sa.go:45] found service account: "default"
	I1016 17:45:06.966939   13712 default_sa.go:55] duration metric: took 2.961133ms for default service account to be created ...
	I1016 17:45:06.966948   13712 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 17:45:07.062187   13712 system_pods.go:86] 20 kube-system pods found
	I1016 17:45:07.062233   13712 system_pods.go:89] "amd-gpu-device-plugin-6bmbl" [92edcbbf-d797-4999-8ce6-d9bd732cc23e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1016 17:45:07.062252   13712 system_pods.go:89] "coredns-66bc5c9577-75dtc" [78c8df84-91a0-4258-99dc-3cb63420358f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 17:45:07.062263   13712 system_pods.go:89] "csi-hostpath-attacher-0" [1cd92c52-4deb-4b96-8e95-d000dd51d895] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 17:45:07.062278   13712 system_pods.go:89] "csi-hostpath-resizer-0" [5a7f2e9a-0e16-4f9a-89da-404ff25e4115] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 17:45:07.062296   13712 system_pods.go:89] "csi-hostpathplugin-lwfnt" [d0e19e01-0ca5-4a49-9f8e-3cd3438fed4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 17:45:07.062308   13712 system_pods.go:89] "etcd-addons-431183" [dacbf6c0-3773-4f4e-a814-ed8813ec5a42] Running
	I1016 17:45:07.062316   13712 system_pods.go:89] "kindnet-xm247" [3a190cf7-af44-4a35-8cea-1a4e799fab68] Running
	I1016 17:45:07.062327   13712 system_pods.go:89] "kube-apiserver-addons-431183" [e968414a-90f6-452b-bc3f-2e8e1999b8e4] Running
	I1016 17:45:07.062332   13712 system_pods.go:89] "kube-controller-manager-addons-431183" [ec5d667f-8b35-4c84-a475-78cf546a78a0] Running
	I1016 17:45:07.062353   13712 system_pods.go:89] "kube-ingress-dns-minikube" [b40908b0-a37c-4873-b577-02403cfebda1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1016 17:45:07.062369   13712 system_pods.go:89] "kube-proxy-kxgwk" [1757da5d-0d02-4508-847f-d04b458e7497] Running
	I1016 17:45:07.062375   13712 system_pods.go:89] "kube-scheduler-addons-431183" [67d05e32-dc46-40a7-8aeb-1a581cfc7dfd] Running
	I1016 17:45:07.062384   13712 system_pods.go:89] "metrics-server-85b7d694d7-m2l65" [37717fb0-1759-4af3-aa42-feadddd69063] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 17:45:07.062403   13712 system_pods.go:89] "nvidia-device-plugin-daemonset-kcsqr" [895271a9-cb66-441d-924c-5aab58267f88] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1016 17:45:07.062412   13712 system_pods.go:89] "registry-6b586f9694-4gxbm" [760d1bfa-750e-4a66-92c9-6f7903ad398c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 17:45:07.062424   13712 system_pods.go:89] "registry-creds-764b6fb674-4sqn6" [ff6144d2-13c8-475e-b307-4f201354f1d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 17:45:07.062437   13712 system_pods.go:89] "registry-proxy-r2qlf" [d8893400-4bc4-4eea-9742-a241e52d31e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1016 17:45:07.062449   13712 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d7fm5" [c4e22bc5-8ea4-423f-93bb-6b31c1ffb3b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:07.062464   13712 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tbv8w" [74771ef4-79f1-4980-9a86-e516fbb4e571] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:07.062473   13712 system_pods.go:89] "storage-provisioner" [cf381c97-b27b-46f1-b287-85542c5625d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 17:45:07.062496   13712 retry.go:31] will retry after 189.830369ms: missing components: kube-dns
	I1016 17:45:07.076872   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:07.258945   13712 system_pods.go:86] 20 kube-system pods found
	I1016 17:45:07.258979   13712 system_pods.go:89] "amd-gpu-device-plugin-6bmbl" [92edcbbf-d797-4999-8ce6-d9bd732cc23e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1016 17:45:07.258989   13712 system_pods.go:89] "coredns-66bc5c9577-75dtc" [78c8df84-91a0-4258-99dc-3cb63420358f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 17:45:07.258999   13712 system_pods.go:89] "csi-hostpath-attacher-0" [1cd92c52-4deb-4b96-8e95-d000dd51d895] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 17:45:07.259008   13712 system_pods.go:89] "csi-hostpath-resizer-0" [5a7f2e9a-0e16-4f9a-89da-404ff25e4115] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 17:45:07.259025   13712 system_pods.go:89] "csi-hostpathplugin-lwfnt" [d0e19e01-0ca5-4a49-9f8e-3cd3438fed4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 17:45:07.259035   13712 system_pods.go:89] "etcd-addons-431183" [dacbf6c0-3773-4f4e-a814-ed8813ec5a42] Running
	I1016 17:45:07.259042   13712 system_pods.go:89] "kindnet-xm247" [3a190cf7-af44-4a35-8cea-1a4e799fab68] Running
	I1016 17:45:07.259051   13712 system_pods.go:89] "kube-apiserver-addons-431183" [e968414a-90f6-452b-bc3f-2e8e1999b8e4] Running
	I1016 17:45:07.259057   13712 system_pods.go:89] "kube-controller-manager-addons-431183" [ec5d667f-8b35-4c84-a475-78cf546a78a0] Running
	I1016 17:45:07.259070   13712 system_pods.go:89] "kube-ingress-dns-minikube" [b40908b0-a37c-4873-b577-02403cfebda1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1016 17:45:07.259078   13712 system_pods.go:89] "kube-proxy-kxgwk" [1757da5d-0d02-4508-847f-d04b458e7497] Running
	I1016 17:45:07.259084   13712 system_pods.go:89] "kube-scheduler-addons-431183" [67d05e32-dc46-40a7-8aeb-1a581cfc7dfd] Running
	I1016 17:45:07.259092   13712 system_pods.go:89] "metrics-server-85b7d694d7-m2l65" [37717fb0-1759-4af3-aa42-feadddd69063] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 17:45:07.259103   13712 system_pods.go:89] "nvidia-device-plugin-daemonset-kcsqr" [895271a9-cb66-441d-924c-5aab58267f88] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1016 17:45:07.259114   13712 system_pods.go:89] "registry-6b586f9694-4gxbm" [760d1bfa-750e-4a66-92c9-6f7903ad398c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 17:45:07.259125   13712 system_pods.go:89] "registry-creds-764b6fb674-4sqn6" [ff6144d2-13c8-475e-b307-4f201354f1d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 17:45:07.259133   13712 system_pods.go:89] "registry-proxy-r2qlf" [d8893400-4bc4-4eea-9742-a241e52d31e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1016 17:45:07.259144   13712 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d7fm5" [c4e22bc5-8ea4-423f-93bb-6b31c1ffb3b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:07.259153   13712 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tbv8w" [74771ef4-79f1-4980-9a86-e516fbb4e571] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:07.259164   13712 system_pods.go:89] "storage-provisioner" [cf381c97-b27b-46f1-b287-85542c5625d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 17:45:07.259181   13712 retry.go:31] will retry after 351.861677ms: missing components: kube-dns
	I1016 17:45:07.299537   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:07.358683   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:07.358904   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:07.577578   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:07.615929   13712 system_pods.go:86] 20 kube-system pods found
	I1016 17:45:07.615961   13712 system_pods.go:89] "amd-gpu-device-plugin-6bmbl" [92edcbbf-d797-4999-8ce6-d9bd732cc23e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1016 17:45:07.615972   13712 system_pods.go:89] "coredns-66bc5c9577-75dtc" [78c8df84-91a0-4258-99dc-3cb63420358f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 17:45:07.615982   13712 system_pods.go:89] "csi-hostpath-attacher-0" [1cd92c52-4deb-4b96-8e95-d000dd51d895] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 17:45:07.615991   13712 system_pods.go:89] "csi-hostpath-resizer-0" [5a7f2e9a-0e16-4f9a-89da-404ff25e4115] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 17:45:07.616005   13712 system_pods.go:89] "csi-hostpathplugin-lwfnt" [d0e19e01-0ca5-4a49-9f8e-3cd3438fed4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 17:45:07.616011   13712 system_pods.go:89] "etcd-addons-431183" [dacbf6c0-3773-4f4e-a814-ed8813ec5a42] Running
	I1016 17:45:07.616027   13712 system_pods.go:89] "kindnet-xm247" [3a190cf7-af44-4a35-8cea-1a4e799fab68] Running
	I1016 17:45:07.616037   13712 system_pods.go:89] "kube-apiserver-addons-431183" [e968414a-90f6-452b-bc3f-2e8e1999b8e4] Running
	I1016 17:45:07.616042   13712 system_pods.go:89] "kube-controller-manager-addons-431183" [ec5d667f-8b35-4c84-a475-78cf546a78a0] Running
	I1016 17:45:07.616053   13712 system_pods.go:89] "kube-ingress-dns-minikube" [b40908b0-a37c-4873-b577-02403cfebda1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1016 17:45:07.616057   13712 system_pods.go:89] "kube-proxy-kxgwk" [1757da5d-0d02-4508-847f-d04b458e7497] Running
	I1016 17:45:07.616065   13712 system_pods.go:89] "kube-scheduler-addons-431183" [67d05e32-dc46-40a7-8aeb-1a581cfc7dfd] Running
	I1016 17:45:07.616073   13712 system_pods.go:89] "metrics-server-85b7d694d7-m2l65" [37717fb0-1759-4af3-aa42-feadddd69063] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 17:45:07.616082   13712 system_pods.go:89] "nvidia-device-plugin-daemonset-kcsqr" [895271a9-cb66-441d-924c-5aab58267f88] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1016 17:45:07.616096   13712 system_pods.go:89] "registry-6b586f9694-4gxbm" [760d1bfa-750e-4a66-92c9-6f7903ad398c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 17:45:07.616104   13712 system_pods.go:89] "registry-creds-764b6fb674-4sqn6" [ff6144d2-13c8-475e-b307-4f201354f1d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 17:45:07.616114   13712 system_pods.go:89] "registry-proxy-r2qlf" [d8893400-4bc4-4eea-9742-a241e52d31e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1016 17:45:07.616125   13712 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d7fm5" [c4e22bc5-8ea4-423f-93bb-6b31c1ffb3b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:07.616136   13712 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tbv8w" [74771ef4-79f1-4980-9a86-e516fbb4e571] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:07.616146   13712 system_pods.go:89] "storage-provisioner" [cf381c97-b27b-46f1-b287-85542c5625d5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 17:45:07.616165   13712 retry.go:31] will retry after 306.922072ms: missing components: kube-dns
	I1016 17:45:07.800683   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:07.858828   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:07.858841   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:07.928380   13712 system_pods.go:86] 20 kube-system pods found
	I1016 17:45:07.928416   13712 system_pods.go:89] "amd-gpu-device-plugin-6bmbl" [92edcbbf-d797-4999-8ce6-d9bd732cc23e] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1016 17:45:07.928434   13712 system_pods.go:89] "coredns-66bc5c9577-75dtc" [78c8df84-91a0-4258-99dc-3cb63420358f] Running
	I1016 17:45:07.928445   13712 system_pods.go:89] "csi-hostpath-attacher-0" [1cd92c52-4deb-4b96-8e95-d000dd51d895] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1016 17:45:07.928453   13712 system_pods.go:89] "csi-hostpath-resizer-0" [5a7f2e9a-0e16-4f9a-89da-404ff25e4115] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1016 17:45:07.928463   13712 system_pods.go:89] "csi-hostpathplugin-lwfnt" [d0e19e01-0ca5-4a49-9f8e-3cd3438fed4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1016 17:45:07.928474   13712 system_pods.go:89] "etcd-addons-431183" [dacbf6c0-3773-4f4e-a814-ed8813ec5a42] Running
	I1016 17:45:07.928480   13712 system_pods.go:89] "kindnet-xm247" [3a190cf7-af44-4a35-8cea-1a4e799fab68] Running
	I1016 17:45:07.928489   13712 system_pods.go:89] "kube-apiserver-addons-431183" [e968414a-90f6-452b-bc3f-2e8e1999b8e4] Running
	I1016 17:45:07.928495   13712 system_pods.go:89] "kube-controller-manager-addons-431183" [ec5d667f-8b35-4c84-a475-78cf546a78a0] Running
	I1016 17:45:07.928509   13712 system_pods.go:89] "kube-ingress-dns-minikube" [b40908b0-a37c-4873-b577-02403cfebda1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1016 17:45:07.928515   13712 system_pods.go:89] "kube-proxy-kxgwk" [1757da5d-0d02-4508-847f-d04b458e7497] Running
	I1016 17:45:07.928524   13712 system_pods.go:89] "kube-scheduler-addons-431183" [67d05e32-dc46-40a7-8aeb-1a581cfc7dfd] Running
	I1016 17:45:07.928532   13712 system_pods.go:89] "metrics-server-85b7d694d7-m2l65" [37717fb0-1759-4af3-aa42-feadddd69063] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1016 17:45:07.928545   13712 system_pods.go:89] "nvidia-device-plugin-daemonset-kcsqr" [895271a9-cb66-441d-924c-5aab58267f88] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1016 17:45:07.928557   13712 system_pods.go:89] "registry-6b586f9694-4gxbm" [760d1bfa-750e-4a66-92c9-6f7903ad398c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1016 17:45:07.928565   13712 system_pods.go:89] "registry-creds-764b6fb674-4sqn6" [ff6144d2-13c8-475e-b307-4f201354f1d0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1016 17:45:07.928577   13712 system_pods.go:89] "registry-proxy-r2qlf" [d8893400-4bc4-4eea-9742-a241e52d31e1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1016 17:45:07.928587   13712 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d7fm5" [c4e22bc5-8ea4-423f-93bb-6b31c1ffb3b5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:07.928603   13712 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tbv8w" [74771ef4-79f1-4980-9a86-e516fbb4e571] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1016 17:45:07.928609   13712 system_pods.go:89] "storage-provisioner" [cf381c97-b27b-46f1-b287-85542c5625d5] Running
	I1016 17:45:07.928622   13712 system_pods.go:126] duration metric: took 961.666538ms to wait for k8s-apps to be running ...
	I1016 17:45:07.928634   13712 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 17:45:07.928684   13712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 17:45:07.945426   13712 system_svc.go:56] duration metric: took 16.775044ms WaitForService to wait for kubelet
	I1016 17:45:07.945456   13712 kubeadm.go:586] duration metric: took 42.741848123s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 17:45:07.945481   13712 node_conditions.go:102] verifying NodePressure condition ...
	I1016 17:45:07.948757   13712 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1016 17:45:07.948787   13712 node_conditions.go:123] node cpu capacity is 8
	I1016 17:45:07.948803   13712 node_conditions.go:105] duration metric: took 3.316577ms to run NodePressure ...
	I1016 17:45:07.948814   13712 start.go:241] waiting for startup goroutines ...
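
The NodePressure step reads the capacity the node advertises (8 CPUs and 304681132Ki of ephemeral storage in this run). A client-go sketch of the same read, reusing the node name and kubeconfig path from this log; it is illustrative, not minikube's node_conditions.go.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-431183", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Capacity is a map of resource name to quantity on the node status.
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Printf("node cpu capacity is %s, ephemeral capacity is %s\n", cpu.String(), eph.String())
}
```
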
	I1016 17:45:08.077904   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:08.300102   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:08.401841   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:08.403842   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:08.577562   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:08.800154   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:08.859375   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:08.861087   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:09.078356   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:09.299218   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:09.358500   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:09.359122   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:09.577317   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:09.801263   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:09.858471   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:09.858526   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:10.077389   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:10.300270   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:10.402929   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:10.402927   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:10.576997   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:10.800924   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:10.858147   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:10.858201   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:11.077515   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:11.300178   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:11.358128   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:11.358236   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:11.576779   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:11.799682   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:11.857638   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:11.858243   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:12.077428   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:12.299776   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:12.358166   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:12.358211   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:12.577518   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:12.799560   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:12.858618   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:12.858878   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:13.080610   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:13.300257   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:13.357950   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:13.357954   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:13.576896   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:13.799455   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:13.900065   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:13.900078   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:14.076852   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:14.299757   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:14.358382   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:14.358437   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:14.577557   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:14.800042   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:14.901450   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:14.901478   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:15.077307   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:15.300740   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:15.358385   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:15.358557   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:15.576627   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:15.799967   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:15.857664   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:15.857931   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:16.077189   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:16.299582   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:16.358242   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:16.358396   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:16.577413   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:16.799914   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:16.857517   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:16.858144   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:17.077167   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:17.300060   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:17.357639   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:17.358030   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:17.577171   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:17.799556   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:17.858440   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:17.858531   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:18.077700   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:18.300365   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:18.358602   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:18.358638   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:18.577627   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:18.800110   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:18.858279   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:18.858527   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:19.077531   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:19.300372   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:19.358331   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:19.358528   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:19.362658   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1016 17:45:19.576904   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:19.801504   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:19.857512   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:19.858037   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1016 17:45:19.967203   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1016 17:45:19.967239   13712 retry.go:31] will retry after 30.485247864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
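The validation failure above means at least one YAML document in /etc/kubernetes/addons/ig-crd.yaml reached kubectl without the two mandatory type fields; kubectl's client-side validation rejects any document that omits them. A minimal sketch of a well-formed CRD header that would pass this check (the group, kind, and names below are illustrative, not the actual inspektor-gadget CRD):

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: traces.example.io          # illustrative name: must be <plural>.<group>
	spec:
	  group: example.io                # illustrative group
	  names:
	    kind: Trace
	    plural: traces
	  scope: Namespaced
	  versions:
	  - name: v1alpha1
	    served: true
	    storage: true
	    schema:
	      openAPIV3Schema:
	        type: object

A document that has content but omits apiVersion and kind (for example, one whose leading lines were truncated) is reported exactly as above: "apiVersion not set, kind not set".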
	I1016 17:45:20.077623   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:20.300648   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:20.358400   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:20.358423   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:20.577218   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:20.799070   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:20.857751   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:20.858231   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:21.077518   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:21.299824   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:21.358681   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:21.358735   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:21.577445   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:21.799523   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:21.857863   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:21.857867   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:22.076791   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:22.299881   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:22.357706   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:22.357904   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:22.577239   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:22.799400   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:22.859010   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:22.859104   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:23.076922   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:23.300890   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:23.357870   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:23.358274   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:23.576648   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:23.799795   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:23.858974   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:23.859159   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:24.076835   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:24.299932   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:24.357606   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:24.358142   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:24.576907   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:24.799593   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:24.857981   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:24.858008   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:25.110886   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:25.299961   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:25.357582   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:25.358202   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:25.577007   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:25.799550   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:25.859259   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:25.860615   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:26.078468   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:26.300071   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:26.357785   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:26.358330   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:26.577445   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:26.800034   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:26.858537   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:26.858642   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:27.077707   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:27.372329   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:27.372407   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:27.372432   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:27.577629   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:27.800285   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:27.858036   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:27.858085   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:28.076839   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:28.300384   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:28.358153   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:28.358294   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:28.577661   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:28.802246   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:28.869380   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:28.869475   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:29.076937   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:29.300564   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:29.358512   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:29.358572   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:29.577478   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:29.800121   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:29.858167   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:29.858454   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:30.077383   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:30.299743   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:30.358422   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:30.358469   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:30.686620   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:30.799883   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:30.857393   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:30.858006   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:31.076647   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:31.346316   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:31.357690   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:31.357901   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:31.576432   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:31.800242   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:31.901289   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:31.901354   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:32.077376   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:32.299103   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:32.357910   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:32.358422   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:32.576993   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:32.800747   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:32.901272   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:32.901891   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:33.077173   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:33.300370   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:33.401489   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:33.401620   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:33.577536   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:33.800054   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:33.858084   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:33.858232   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:34.077145   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:34.299026   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:34.357615   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:34.358314   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:34.577155   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:34.799009   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:34.858034   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:34.858372   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:35.076909   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:35.309581   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:35.358797   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:35.358919   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:35.577305   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:35.799780   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:35.858509   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:35.858768   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:36.077428   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:36.309520   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:36.359470   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1016 17:45:36.359664   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:36.578471   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:36.800195   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:36.885941   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:36.990204   13712 kapi.go:107] duration metric: took 1m10.134815412s to wait for kubernetes.io/minikube-addons=registry ...
	I1016 17:45:37.212391   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:37.299369   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:37.358396   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:37.577138   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:37.799667   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:37.900319   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:38.076974   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:38.300176   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:38.358203   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:38.577699   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:38.815466   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:38.883225   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:39.076841   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:39.338648   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:39.358737   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:39.577552   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:39.799799   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:39.861162   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:40.077095   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:40.299282   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:40.358423   13712 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1016 17:45:40.577224   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:40.799825   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:40.857745   13712 kapi.go:107] duration metric: took 1m14.003152504s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1016 17:45:41.076882   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:41.300389   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:41.682404   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:41.799792   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:42.076483   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:42.299769   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:42.576516   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:42.799969   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:43.076817   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:43.300038   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:43.577002   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:43.799545   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:44.077385   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:44.299240   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:44.577366   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:44.799892   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:45.076681   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:45.299551   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:45.577254   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1016 17:45:45.799257   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:46.076974   13712 kapi.go:107] duration metric: took 1m12.503275751s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1016 17:45:46.078923   13712 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-431183 cluster.
	I1016 17:45:46.080989   13712 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1016 17:45:46.082058   13712 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
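The `gcp-auth-skip-secret` opt-out described above is an ordinary pod label. A minimal sketch of a pod that would not get credentials mounted (the pod name, image, and the "true" value are illustrative; the message only requires the label key to be present):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds               # illustrative name
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: app                      # illustrative container
	    image: busybox:1.28
	    command: ["sleep", "3600"]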
	I1016 17:45:46.300498   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:46.799670   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:47.300569   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:47.799214   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:48.300023   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:48.799210   13712 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1016 17:45:49.299899   13712 kapi.go:107] duration metric: took 1m22.003802781s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1016 17:45:50.452862   13712 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1016 17:45:50.996759   13712 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1016 17:45:50.996857   13712 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
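Since the addon gives up at this point, the malformed document can be confirmed and the apply re-run by hand; a diagnostic sketch (the profile name is taken from this run, the paths and flags from the failing command itself):

	# Inspect the head of the CRD manifest as it exists inside the node
	minikube -p addons-431183 ssh -- sudo head -n 20 /etc/kubernetes/addons/ig-crd.yaml

	# Re-run the same apply without client-side validation, as the error suggests;
	# this is a workaround only, the manifest still needs apiVersion and kind
	minikube -p addons-431183 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml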
	I1016 17:45:50.999549   13712 out.go:179] * Enabled addons: registry-creds, ingress-dns, nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, amd-gpu-device-plugin, metrics-server, storage-provisioner-rancher, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1016 17:45:51.001122   13712 addons.go:514] duration metric: took 1m25.797455382s for enable addons: enabled=[registry-creds ingress-dns nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner amd-gpu-device-plugin metrics-server storage-provisioner-rancher yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1016 17:45:51.001174   13712 start.go:246] waiting for cluster config update ...
	I1016 17:45:51.001197   13712 start.go:255] writing updated cluster config ...
	I1016 17:45:51.001522   13712 ssh_runner.go:195] Run: rm -f paused
	I1016 17:45:51.006259   13712 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 17:45:51.009658   13712 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-75dtc" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:51.014060   13712 pod_ready.go:94] pod "coredns-66bc5c9577-75dtc" is "Ready"
	I1016 17:45:51.014089   13712 pod_ready.go:86] duration metric: took 4.410303ms for pod "coredns-66bc5c9577-75dtc" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:51.015922   13712 pod_ready.go:83] waiting for pod "etcd-addons-431183" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:51.019406   13712 pod_ready.go:94] pod "etcd-addons-431183" is "Ready"
	I1016 17:45:51.019424   13712 pod_ready.go:86] duration metric: took 3.485204ms for pod "etcd-addons-431183" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:51.021180   13712 pod_ready.go:83] waiting for pod "kube-apiserver-addons-431183" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:51.024534   13712 pod_ready.go:94] pod "kube-apiserver-addons-431183" is "Ready"
	I1016 17:45:51.024558   13712 pod_ready.go:86] duration metric: took 3.356895ms for pod "kube-apiserver-addons-431183" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:51.026249   13712 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-431183" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:51.410357   13712 pod_ready.go:94] pod "kube-controller-manager-addons-431183" is "Ready"
	I1016 17:45:51.410391   13712 pod_ready.go:86] duration metric: took 384.117954ms for pod "kube-controller-manager-addons-431183" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:51.610570   13712 pod_ready.go:83] waiting for pod "kube-proxy-kxgwk" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:52.010346   13712 pod_ready.go:94] pod "kube-proxy-kxgwk" is "Ready"
	I1016 17:45:52.010374   13712 pod_ready.go:86] duration metric: took 399.782985ms for pod "kube-proxy-kxgwk" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:52.210817   13712 pod_ready.go:83] waiting for pod "kube-scheduler-addons-431183" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:52.610356   13712 pod_ready.go:94] pod "kube-scheduler-addons-431183" is "Ready"
	I1016 17:45:52.610391   13712 pod_ready.go:86] duration metric: took 399.549305ms for pod "kube-scheduler-addons-431183" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 17:45:52.610405   13712 pod_ready.go:40] duration metric: took 1.604114134s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 17:45:52.654980   13712 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1016 17:45:52.657098   13712 out.go:179] * Done! kubectl is now configured to use "addons-431183" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 16 17:45:48 addons-431183 crio[771]: time="2025-10-16T17:45:48.623580284Z" level=info msg="Starting container: 5de201fd76a95a80e483d6bee60036aeac5703e71d605551db1f90ed5e9267a7" id=dfe7a10c-1c6f-4cc2-a29a-423cbef40171 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 17:45:48 addons-431183 crio[771]: time="2025-10-16T17:45:48.626375011Z" level=info msg="Started container" PID=6208 containerID=5de201fd76a95a80e483d6bee60036aeac5703e71d605551db1f90ed5e9267a7 description=kube-system/csi-hostpathplugin-lwfnt/csi-snapshotter id=dfe7a10c-1c6f-4cc2-a29a-423cbef40171 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6d6a0e7fe0b48f5f7a9451f25d9ea1d2a2afd70d4f76e5c440f8cd97ba0a7196
	Oct 16 17:45:53 addons-431183 crio[771]: time="2025-10-16T17:45:53.498060173Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3cd71680-d2cd-4a37-a9c3-9e4a8db71adf name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 17:45:53 addons-431183 crio[771]: time="2025-10-16T17:45:53.498167767Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 17:45:53 addons-431183 crio[771]: time="2025-10-16T17:45:53.503967555Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0c1cb091f216375a119d3debe8b025a73c76b761e35295a26cfeeccfa1c62880 UID:9bcd0883-4637-415b-979c-50c3856ec728 NetNS:/var/run/netns/640c6e67-786d-4742-862f-ca6b72a93a38 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00037c848}] Aliases:map[]}"
	Oct 16 17:45:53 addons-431183 crio[771]: time="2025-10-16T17:45:53.503995462Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 16 17:45:53 addons-431183 crio[771]: time="2025-10-16T17:45:53.51338565Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0c1cb091f216375a119d3debe8b025a73c76b761e35295a26cfeeccfa1c62880 UID:9bcd0883-4637-415b-979c-50c3856ec728 NetNS:/var/run/netns/640c6e67-786d-4742-862f-ca6b72a93a38 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00037c848}] Aliases:map[]}"
	Oct 16 17:45:53 addons-431183 crio[771]: time="2025-10-16T17:45:53.513502485Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 16 17:45:53 addons-431183 crio[771]: time="2025-10-16T17:45:53.514393389Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 16 17:45:53 addons-431183 crio[771]: time="2025-10-16T17:45:53.51560622Z" level=info msg="Ran pod sandbox 0c1cb091f216375a119d3debe8b025a73c76b761e35295a26cfeeccfa1c62880 with infra container: default/busybox/POD" id=3cd71680-d2cd-4a37-a9c3-9e4a8db71adf name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 17:45:53 addons-431183 crio[771]: time="2025-10-16T17:45:53.516803348Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=62e3c9dd-8109-48ef-9093-d2a34345e590 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 17:45:53 addons-431183 crio[771]: time="2025-10-16T17:45:53.516931823Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=62e3c9dd-8109-48ef-9093-d2a34345e590 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 17:45:53 addons-431183 crio[771]: time="2025-10-16T17:45:53.516989461Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=62e3c9dd-8109-48ef-9093-d2a34345e590 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 17:45:53 addons-431183 crio[771]: time="2025-10-16T17:45:53.517555514Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cc4e110c-6c4f-4095-a9fb-966d71d639ce name=/runtime.v1.ImageService/PullImage
	Oct 16 17:45:53 addons-431183 crio[771]: time="2025-10-16T17:45:53.519075239Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 16 17:45:54 addons-431183 crio[771]: time="2025-10-16T17:45:54.841581135Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=cc4e110c-6c4f-4095-a9fb-966d71d639ce name=/runtime.v1.ImageService/PullImage
	Oct 16 17:45:54 addons-431183 crio[771]: time="2025-10-16T17:45:54.842188467Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c558dc0c-1634-48b3-8a96-9b2675bf78ac name=/runtime.v1.ImageService/ImageStatus
	Oct 16 17:45:54 addons-431183 crio[771]: time="2025-10-16T17:45:54.843707532Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7a578946-7ae4-4aaa-9605-21c59514cd66 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 17:45:54 addons-431183 crio[771]: time="2025-10-16T17:45:54.848010849Z" level=info msg="Creating container: default/busybox/busybox" id=d8133681-5c7b-4bb3-8554-14b965df1fa0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 17:45:54 addons-431183 crio[771]: time="2025-10-16T17:45:54.848745849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 17:45:54 addons-431183 crio[771]: time="2025-10-16T17:45:54.853885212Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 17:45:54 addons-431183 crio[771]: time="2025-10-16T17:45:54.854345186Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 17:45:54 addons-431183 crio[771]: time="2025-10-16T17:45:54.886019067Z" level=info msg="Created container a3c5b6e2d8d5e2da46c21abe6ed35ea66f6d0ccc024ce541d19098d2813ab33a: default/busybox/busybox" id=d8133681-5c7b-4bb3-8554-14b965df1fa0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 17:45:54 addons-431183 crio[771]: time="2025-10-16T17:45:54.886731097Z" level=info msg="Starting container: a3c5b6e2d8d5e2da46c21abe6ed35ea66f6d0ccc024ce541d19098d2813ab33a" id=41d3d3f6-4c09-4a17-a031-3cbf467835e5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 17:45:54 addons-431183 crio[771]: time="2025-10-16T17:45:54.888505827Z" level=info msg="Started container" PID=6337 containerID=a3c5b6e2d8d5e2da46c21abe6ed35ea66f6d0ccc024ce541d19098d2813ab33a description=default/busybox/busybox id=41d3d3f6-4c09-4a17-a031-3cbf467835e5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0c1cb091f216375a119d3debe8b025a73c76b761e35295a26cfeeccfa1c62880
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	a3c5b6e2d8d5e       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   0c1cb091f2163       busybox                                     default
	5de201fd76a95       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          14 seconds ago       Running             csi-snapshotter                          0                   6d6a0e7fe0b48       csi-hostpathplugin-lwfnt                    kube-system
	d2b409cc61d3e       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          15 seconds ago       Running             csi-provisioner                          0                   6d6a0e7fe0b48       csi-hostpathplugin-lwfnt                    kube-system
	08fc54b7ecf7c       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            16 seconds ago       Running             liveness-probe                           0                   6d6a0e7fe0b48       csi-hostpathplugin-lwfnt                    kube-system
	65f92eb5c9126       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           16 seconds ago       Running             hostpath                                 0                   6d6a0e7fe0b48       csi-hostpathplugin-lwfnt                    kube-system
	91299ba87caea       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 17 seconds ago       Running             gcp-auth                                 0                   adb3b0e94c263       gcp-auth-78565c9fb4-bjwlm                   gcp-auth
	99bd9e93e1a1c       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                19 seconds ago       Running             node-driver-registrar                    0                   6d6a0e7fe0b48       csi-hostpathplugin-lwfnt                    kube-system
	f92ce88c96cd1       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            20 seconds ago       Running             gadget                                   0                   dfd40a0715837       gadget-rwgd7                                gadget
	b8426a978ff8c       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             23 seconds ago       Running             controller                               0                   c4b2956e6d733       ingress-nginx-controller-675c5ddd98-5qwrf   ingress-nginx
	38a6424f0235c       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              27 seconds ago       Running             registry-proxy                           0                   7128c4c1360b4       registry-proxy-r2qlf                        kube-system
	d2446d21f394d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   28 seconds ago       Running             csi-external-health-monitor-controller   0                   6d6a0e7fe0b48       csi-hostpathplugin-lwfnt                    kube-system
	a6e738e35332b       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     29 seconds ago       Running             amd-gpu-device-plugin                    0                   6012a442be78d       amd-gpu-device-plugin-6bmbl                 kube-system
	b7a0a3afc5b5e       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     30 seconds ago       Running             nvidia-device-plugin-ctr                 0                   1bfa9a3b7995a       nvidia-device-plugin-daemonset-kcsqr        kube-system
	cbbc3b73b7dda       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      38 seconds ago       Running             volume-snapshot-controller               0                   77f4aad1b6981       snapshot-controller-7d9fbc56b8-d7fm5        kube-system
	dcfdf0dfc495c       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             39 seconds ago       Running             csi-attacher                             0                   49157714bbb57       csi-hostpath-attacher-0                     kube-system
	0ed8b46f049ef       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   40 seconds ago       Exited              patch                                    0                   3dae293601dad       gcp-auth-certs-patch-hzksf                  gcp-auth
	2ee86d0cb2da0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   40 seconds ago       Exited              create                                   0                   37f68b8c3d669       gcp-auth-certs-create-9m9d6                 gcp-auth
	cc78d2815338b       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      40 seconds ago       Running             volume-snapshot-controller               0                   13541f8c7b072       snapshot-controller-7d9fbc56b8-tbv8w        kube-system
	7f6105c26156d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   41 seconds ago       Exited              patch                                    0                   98207222fd8e0       ingress-nginx-admission-patch-54q7q         ingress-nginx
	b29b337a76127       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   41 seconds ago       Exited              create                                   0                   83632961c120a       ingress-nginx-admission-create-74xz8        ingress-nginx
	e825d0a32cabb       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              42 seconds ago       Running             csi-resizer                              0                   0d010a48c2a8a       csi-hostpath-resizer-0                      kube-system
	c1ee69de8a39e       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              43 seconds ago       Running             yakd                                     0                   c7bdd7d1294f3       yakd-dashboard-5ff678cb9-6dx84              yakd-dashboard
	f272694b208ca       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             45 seconds ago       Running             local-path-provisioner                   0                   0d06f97ffa6a6       local-path-provisioner-648f6765c9-vrpng     local-path-storage
	d489a26138352       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               46 seconds ago       Running             cloud-spanner-emulator                   0                   f5c45c23a757e       cloud-spanner-emulator-86bd5cbb97-6ncpk     default
	eec1c645d1dfa       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           49 seconds ago       Running             registry                                 0                   e5a389c40cfcb       registry-6b586f9694-4gxbm                   kube-system
	8eb1df0ef8e8f       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               50 seconds ago       Running             minikube-ingress-dns                     0                   9124f5ca54c18       kube-ingress-dns-minikube                   kube-system
	eeac328352576       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        54 seconds ago       Running             metrics-server                           0                   b7082ce4753e7       metrics-server-85b7d694d7-m2l65             kube-system
	57066b2143979       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             56 seconds ago       Running             coredns                                  0                   7bb941800e21f       coredns-66bc5c9577-75dtc                    kube-system
	a03f0987c6223       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             56 seconds ago       Running             storage-provisioner                      0                   9e1a1cf5d489f       storage-provisioner                         kube-system
	41d8ee3133047       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   55d838a89f39f       kindnet-xm247                               kube-system
	45684000aebf9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   9c5f6ff04e599       kube-proxy-kxgwk                            kube-system
	b6296707185d3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   2e5c5c78c42ed       etcd-addons-431183                          kube-system
	dff4028c6cade       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   3c50e6a4c57be       kube-apiserver-addons-431183                kube-system
	9ddd87f44d89a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   dee15eaa106c4       kube-controller-manager-addons-431183       kube-system
	11a2ed25b01f6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   bc9dea3df5e23       kube-scheduler-addons-431183                kube-system
	
	
	==> coredns [57066b214397994168ca58cce65e7697dc0566e9cbd79ddce6bd447b1bb937a0] <==
	[INFO] 10.244.0.18:40727 - 23481 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.004011049s
	[INFO] 10.244.0.18:33026 - 3001 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000083657s
	[INFO] 10.244.0.18:33026 - 3288 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000171656s
	[INFO] 10.244.0.18:45317 - 32063 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000098323s
	[INFO] 10.244.0.18:45317 - 31801 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000127725s
	[INFO] 10.244.0.18:60718 - 12550 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000065524s
	[INFO] 10.244.0.18:60718 - 12263 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000108348s
	[INFO] 10.244.0.18:52845 - 22967 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000137081s
	[INFO] 10.244.0.18:52845 - 22547 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000180574s
	[INFO] 10.244.0.22:59215 - 28992 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000205327s
	[INFO] 10.244.0.22:53692 - 49813 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000281937s
	[INFO] 10.244.0.22:53584 - 45515 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00014575s
	[INFO] 10.244.0.22:44684 - 57192 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000224647s
	[INFO] 10.244.0.22:49118 - 43863 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000110237s
	[INFO] 10.244.0.22:34813 - 44730 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000162948s
	[INFO] 10.244.0.22:36170 - 17331 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003105696s
	[INFO] 10.244.0.22:43372 - 46979 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004197065s
	[INFO] 10.244.0.22:33518 - 24990 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.006204545s
	[INFO] 10.244.0.22:40245 - 57371 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.007524606s
	[INFO] 10.244.0.22:53825 - 43335 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005256461s
	[INFO] 10.244.0.22:48790 - 64380 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.007972273s
	[INFO] 10.244.0.22:57147 - 52564 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006866058s
	[INFO] 10.244.0.22:41544 - 21263 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007119013s
	[INFO] 10.244.0.22:42198 - 55479 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000996149s
	[INFO] 10.244.0.22:47993 - 40381 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001133847s
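
Editor's note: the NXDOMAIN/NOERROR pairs above are ordinary ndots:5 search-path expansion, not a DNS fault: each of the pod's search domains (us-east4-a.c.k8s-minikube.internal, c.k8s-minikube.internal, google.internal, local, ...) is tried and rejected before the bare name finally answers NOERROR. A minimal sketch, assuming it runs inside a pod on this cluster (file name hypothetical, not part of the suite), showing that a fully qualified name with a trailing dot skips the expansion entirely:

	// dnscheck.go - hypothetical helper, not part of the suite.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		// The trailing dot marks the name fully qualified, so the resolver
		// skips the pod's search domains instead of walking them as above.
		addrs, err := net.DefaultResolver.LookupIPAddr(ctx, "registry.kube-system.svc.cluster.local.")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		for _, a := range addrs {
			fmt.Println(a.IP)
		}
	}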
	
	
	==> describe nodes <==
	Name:               addons-431183
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-431183
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=addons-431183
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T17_44_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-431183
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-431183"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 17:44:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-431183
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 17:46:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 17:45:51 +0000   Thu, 16 Oct 2025 17:44:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 17:45:51 +0000   Thu, 16 Oct 2025 17:44:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 17:45:51 +0000   Thu, 16 Oct 2025 17:44:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 17:45:51 +0000   Thu, 16 Oct 2025 17:45:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-431183
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                067683fc-48c6-4d92-80f9-6bb27411d961
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-86bd5cbb97-6ncpk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  gadget                      gadget-rwgd7                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  gcp-auth                    gcp-auth-78565c9fb4-bjwlm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-5qwrf    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         97s
	  kube-system                 amd-gpu-device-plugin-6bmbl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 coredns-66bc5c9577-75dtc                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     98s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 csi-hostpathplugin-lwfnt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 etcd-addons-431183                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         105s
	  kube-system                 kindnet-xm247                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      98s
	  kube-system                 kube-apiserver-addons-431183                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-controller-manager-addons-431183        200m (2%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-proxy-kxgwk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-scheduler-addons-431183                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 metrics-server-85b7d694d7-m2l65              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         97s
	  kube-system                 nvidia-device-plugin-daemonset-kcsqr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 registry-6b586f9694-4gxbm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 registry-creds-764b6fb674-4sqn6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 registry-proxy-r2qlf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 snapshot-controller-7d9fbc56b8-d7fm5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 snapshot-controller-7d9fbc56b8-tbv8w         0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  local-path-storage          local-path-provisioner-648f6765c9-vrpng      0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-6dx84               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 96s   kube-proxy       
	  Normal  Starting                 104s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s  kubelet          Node addons-431183 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s  kubelet          Node addons-431183 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s  kubelet          Node addons-431183 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           99s   node-controller  Node addons-431183 event: Registered Node addons-431183 in Controller
	  Normal  NodeReady                57s   kubelet          Node addons-431183 status is now: NodeReady
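
Editor's note: NodeReady landed at 17:45:06, roughly 47 seconds after the kubelet started, which is consistent with the 57s Age of the DaemonSet pods (amd-gpu-device-plugin, csi-hostpathplugin, nvidia-device-plugin, registry-proxy) in the table above. A minimal client-go sketch, assuming a kubeconfig at the default path (file name hypothetical, not part of the suite), that reads the same conditions programmatically:

	// nodeconditions.go - hypothetical helper, not part of the suite.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-431183", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Prints the same Type/Status/Reason triples the Conditions table shows.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}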
	
	
	==> dmesg <==
	[Oct16 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001707] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.399718] i8042: Warning: Keylock active
	[  +0.012864] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.005070] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000811] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000778] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.001003] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000864] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000895] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000940] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000909] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000875] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.495774] block sda: the capability attribute has been deprecated.
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [b6296707185d3b7e279abf2a92f9765e375112a754d9ea74965f9e1abd3911e2] <==
	{"level":"warn","ts":"2025-10-16T17:44:16.744663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:16.751112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:16.757844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:16.770223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:16.776671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:16.782881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:16.827805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:27.781949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:27.788316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:54.212445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:54.219106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:54.234613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:44:54.240985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54408","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-16T17:45:30.685209Z","caller":"traceutil/trace.go:172","msg":"trace[774761099] linearizableReadLoop","detail":"{readStateIndex:1112; appliedIndex:1112; }","duration":"109.206438ms","start":"2025-10-16T17:45:30.575976Z","end":"2025-10-16T17:45:30.685182Z","steps":["trace[774761099] 'read index received'  (duration: 109.199129ms)","trace[774761099] 'applied index is now lower than readState.Index'  (duration: 6.327µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-16T17:45:30.685331Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.328442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-16T17:45:30.685407Z","caller":"traceutil/trace.go:172","msg":"trace[764036054] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1081; }","duration":"109.421754ms","start":"2025-10-16T17:45:30.575971Z","end":"2025-10-16T17:45:30.685392Z","steps":["trace[764036054] 'agreement among raft nodes before linearized reading'  (duration: 109.283876ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T17:45:30.685413Z","caller":"traceutil/trace.go:172","msg":"trace[1015217370] transaction","detail":"{read_only:false; response_revision:1082; number_of_response:1; }","duration":"110.362935ms","start":"2025-10-16T17:45:30.575036Z","end":"2025-10-16T17:45:30.685399Z","steps":["trace[1015217370] 'process raft request'  (duration: 110.203854ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-16T17:45:36.988254Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.350532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-16T17:45:36.988319Z","caller":"traceutil/trace.go:172","msg":"trace[782498295] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1114; }","duration":"130.426718ms","start":"2025-10-16T17:45:36.857878Z","end":"2025-10-16T17:45:36.988305Z","steps":["trace[782498295] 'agreement among raft nodes before linearized reading'  (duration: 73.167239ms)","trace[782498295] 'range keys from in-memory index tree'  (duration: 57.156255ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-16T17:45:36.988579Z","caller":"traceutil/trace.go:172","msg":"trace[824864921] transaction","detail":"{read_only:false; response_revision:1115; number_of_response:1; }","duration":"159.015549ms","start":"2025-10-16T17:45:36.829548Z","end":"2025-10-16T17:45:36.988563Z","steps":["trace[824864921] 'process raft request'  (duration: 101.491659ms)","trace[824864921] 'compare'  (duration: 57.204922ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-16T17:45:37.210910Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.8176ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-16T17:45:37.210970Z","caller":"traceutil/trace.go:172","msg":"trace[1874514587] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1115; }","duration":"134.887486ms","start":"2025-10-16T17:45:37.076068Z","end":"2025-10-16T17:45:37.210956Z","steps":["trace[1874514587] 'agreement among raft nodes before linearized reading'  (duration: 67.082922ms)","trace[1874514587] 'range keys from in-memory index tree'  (duration: 67.706562ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-16T17:45:37.211126Z","caller":"traceutil/trace.go:172","msg":"trace[932068433] transaction","detail":"{read_only:false; response_revision:1117; number_of_response:1; }","duration":"206.842153ms","start":"2025-10-16T17:45:37.004266Z","end":"2025-10-16T17:45:37.211108Z","steps":["trace[932068433] 'process raft request'  (duration: 206.72694ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T17:45:37.211172Z","caller":"traceutil/trace.go:172","msg":"trace[2080297863] transaction","detail":"{read_only:false; response_revision:1116; number_of_response:1; }","duration":"217.543749ms","start":"2025-10-16T17:45:36.993611Z","end":"2025-10-16T17:45:37.211155Z","steps":["trace[2080297863] 'process raft request'  (duration: 149.582297ms)","trace[2080297863] 'compare'  (duration: 67.690706ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-16T17:45:41.523200Z","caller":"traceutil/trace.go:172","msg":"trace[325051896] transaction","detail":"{read_only:false; response_revision:1142; number_of_response:1; }","duration":"113.435004ms","start":"2025-10-16T17:45:41.409751Z","end":"2025-10-16T17:45:41.523186Z","steps":["trace[325051896] 'process raft request'  (duration: 113.329749ms)"],"step_count":1}
	
	
	==> gcp-auth [91299ba87caea68119fa480d693dcbde2ce9a5e0369273f86d1a501c683e5e82] <==
	2025/10/16 17:45:45 GCP Auth Webhook started!
	2025/10/16 17:45:52 Ready to marshal response ...
	2025/10/16 17:45:52 Ready to write response ...
	2025/10/16 17:45:53 Ready to marshal response ...
	2025/10/16 17:45:53 Ready to write response ...
	2025/10/16 17:45:53 Ready to marshal response ...
	2025/10/16 17:45:53 Ready to write response ...
	
	
	==> kernel <==
	 17:46:03 up 28 min,  0 user,  load average: 1.68, 0.73, 0.28
	Linux addons-431183 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [41d8ee31330479f662f22c3d691f1814f4211417eb9afa4e0e1436fff7459c0d] <==
	I1016 17:44:25.860725       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T17:44:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 17:44:26.154850       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 17:44:26.154870       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 17:44:26.154880       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 17:44:26.155646       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1016 17:44:56.155013       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1016 17:44:56.156109       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1016 17:44:56.156142       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1016 17:44:56.156166       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1016 17:44:57.555766       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 17:44:57.555790       1 metrics.go:72] Registering metrics
	I1016 17:44:57.555881       1 controller.go:711] "Syncing nftables rules"
	I1016 17:45:06.162599       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:45:06.162663       1 main.go:301] handling current node
	I1016 17:45:16.154816       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:45:16.154860       1 main.go:301] handling current node
	I1016 17:45:26.155337       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:45:26.155790       1 main.go:301] handling current node
	I1016 17:45:36.155785       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:45:36.155814       1 main.go:301] handling current node
	I1016 17:45:46.154703       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:45:46.154766       1 main.go:301] handling current node
	I1016 17:45:56.155104       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:45:56.155142       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dff4028c6cade53db8c168852a237cc808da96d812dcf0a0d74a84d9fd8e1856] <==
	E1016 17:45:09.774702       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.237.197:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.237.197:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.237.197:443: connect: connection refused" logger="UnhandledError"
	E1016 17:45:09.774765       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1016 17:45:09.776035       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.237.197:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.237.197:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.237.197:443: connect: connection refused" logger="UnhandledError"
	E1016 17:45:09.784855       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.237.197:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.237.197:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.237.197:443: connect: connection refused" logger="UnhandledError"
	W1016 17:45:10.776900       1 handler_proxy.go:99] no RequestInfo found in the context
	W1016 17:45:10.776942       1 handler_proxy.go:99] no RequestInfo found in the context
	E1016 17:45:10.776993       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1016 17:45:10.777018       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1016 17:45:10.776993       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1016 17:45:10.778178       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1016 17:45:14.811829       1 handler_proxy.go:99] no RequestInfo found in the context
	E1016 17:45:14.811902       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.237.197:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.237.197:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	E1016 17:45:14.811907       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1016 17:45:14.826382       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1016 17:46:01.340546       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60040: use of closed network connection
	E1016 17:46:01.495483       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:60070: use of closed network connection
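
Editor's note: the repeated 503s for v1beta1.metrics.k8s.io show the aggregation layer failing to reach metrics-server while that pod was still starting; the "Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager" line at 17:45:14 marks recovery. A minimal sketch, assuming a kubeconfig at the default path (file name hypothetical, not part of the suite), that probes the same aggregated API through discovery:

	// metricsprobe.go - hypothetical helper, not part of the suite.
	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Resolves only once the apiserver can proxy to metrics-server;
		// while the 503s above were firing, this returns an error.
		rl, err := cs.Discovery().ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
		if err != nil {
			fmt.Println("aggregated API unavailable:", err)
			return
		}
		for _, r := range rl.APIResources {
			fmt.Println(r.Name)
		}
	}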
	
	
	==> kube-controller-manager [9ddd87f44d89a802929d4d0b0c661f079cd91f596a5e0d6f5d10e18663962117] <==
	I1016 17:44:24.194631       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1016 17:44:24.194769       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1016 17:44:24.194792       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1016 17:44:24.194883       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 17:44:24.195093       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1016 17:44:24.195121       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1016 17:44:24.195132       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 17:44:24.195431       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 17:44:24.195440       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1016 17:44:24.195458       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1016 17:44:24.195579       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 17:44:24.196843       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 17:44:24.199060       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1016 17:44:24.200228       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1016 17:44:24.201420       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 17:44:24.207629       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1016 17:44:24.215031       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1016 17:44:54.205473       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1016 17:44:54.205611       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1016 17:44:54.205653       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1016 17:44:54.225751       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1016 17:44:54.229246       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1016 17:44:54.306045       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 17:44:54.329603       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 17:45:09.200422       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [45684000aebf96bc6ace9a9a3a1dc6c5c1d69d5fa2cba670031d05998617678d] <==
	I1016 17:44:25.645489       1 server_linux.go:53] "Using iptables proxy"
	I1016 17:44:25.986615       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 17:44:26.086811       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 17:44:26.092821       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1016 17:44:26.096747       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 17:44:26.262565       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 17:44:26.262744       1 server_linux.go:132] "Using iptables Proxier"
	I1016 17:44:26.271429       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 17:44:26.280773       1 server.go:527] "Version info" version="v1.34.1"
	I1016 17:44:26.280868       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 17:44:26.291510       1 config.go:200] "Starting service config controller"
	I1016 17:44:26.297183       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 17:44:26.291704       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 17:44:26.297509       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 17:44:26.291773       1 config.go:106] "Starting endpoint slice config controller"
	I1016 17:44:26.297578       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 17:44:26.293681       1 config.go:309] "Starting node config controller"
	I1016 17:44:26.297642       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 17:44:26.297666       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 17:44:26.399333       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1016 17:44:26.399394       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 17:44:26.399424       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [11a2ed25b01f6ec1f1099f8328d958fe73b716ce518c158023e0e7704045d279] <==
	E1016 17:44:17.238364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 17:44:17.238421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 17:44:17.238451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 17:44:17.238520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 17:44:17.238570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 17:44:17.238599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 17:44:17.238602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 17:44:17.238659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 17:44:17.238694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 17:44:17.238697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 17:44:17.238759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 17:44:17.238808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 17:44:17.238808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 17:44:18.084602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 17:44:18.126969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 17:44:18.149882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 17:44:18.155121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 17:44:18.160389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 17:44:18.219495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 17:44:18.386523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 17:44:18.408875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 17:44:18.441386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 17:44:18.472404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 17:44:18.482372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1016 17:44:18.833771       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
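
Editor's note: the Forbidden errors at 17:44:17-18 are the usual control-plane startup race: the scheduler's informers begin listing before the apiserver has reconciled the bootstrap RBAC roles, and they retry until the bindings land (hence "Caches are synced" above). A minimal sketch, assuming cluster-admin credentials in the default kubeconfig (file name hypothetical, not part of the suite), asking the authorizer the same question directly:

	// schedrbac.go - hypothetical helper, not part of the suite.
	package main

	import (
		"context"
		"fmt"

		authorizationv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// May system:kube-scheduler list pods cluster-wide?
		sar := &authorizationv1.SubjectAccessReview{
			Spec: authorizationv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authorizationv1.ResourceAttributes{
					Verb:     "list",
					Resource: "pods",
				},
			},
		}
		res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
			context.Background(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("allowed:", res.Status.Allowed, res.Status.Reason)
	}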
	
	
	==> kubelet <==
	Oct 16 17:45:25 addons-431183 kubelet[1277]: I1016 17:45:25.061591    1277 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l2hb8\" (UniqueName: \"kubernetes.io/projected/50e28a64-7d2b-4310-abbe-7650d0f44db0-kube-api-access-l2hb8\") on node \"addons-431183\" DevicePath \"\""
	Oct 16 17:45:25 addons-431183 kubelet[1277]: I1016 17:45:25.772168    1277 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dae293601dadf1c16cf0ae22069a821abc00ff2f8bc68cbd9eeea7ce13a5a87"
	Oct 16 17:45:25 addons-431183 kubelet[1277]: I1016 17:45:25.775943    1277 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37f68b8c3d66946b17921126763d6ca513c733a46ef70338cacfa30587c9c8ed"
	Oct 16 17:45:32 addons-431183 kubelet[1277]: I1016 17:45:32.800840    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-kcsqr" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 17:45:32 addons-431183 kubelet[1277]: I1016 17:45:32.810576    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-kcsqr" podStartSLOduration=1.208823692 podStartE2EDuration="26.810553529s" podCreationTimestamp="2025-10-16 17:45:06 +0000 UTC" firstStartedPulling="2025-10-16 17:45:07.010847613 +0000 UTC m=+47.550935026" lastFinishedPulling="2025-10-16 17:45:32.612577461 +0000 UTC m=+73.152664863" observedRunningTime="2025-10-16 17:45:32.810442384 +0000 UTC m=+73.350529804" watchObservedRunningTime="2025-10-16 17:45:32.810553529 +0000 UTC m=+73.350640949"
	Oct 16 17:45:33 addons-431183 kubelet[1277]: I1016 17:45:33.807063    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-kcsqr" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 17:45:33 addons-431183 kubelet[1277]: I1016 17:45:33.807190    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-6bmbl" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 17:45:33 addons-431183 kubelet[1277]: I1016 17:45:33.818168    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-6bmbl" podStartSLOduration=1.524125178 podStartE2EDuration="27.818148656s" podCreationTimestamp="2025-10-16 17:45:06 +0000 UTC" firstStartedPulling="2025-10-16 17:45:07.011926565 +0000 UTC m=+47.552013964" lastFinishedPulling="2025-10-16 17:45:33.305950042 +0000 UTC m=+73.846037442" observedRunningTime="2025-10-16 17:45:33.817643369 +0000 UTC m=+74.357730812" watchObservedRunningTime="2025-10-16 17:45:33.818148656 +0000 UTC m=+74.358236076"
	Oct 16 17:45:34 addons-431183 kubelet[1277]: I1016 17:45:34.811996    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-6bmbl" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 17:45:36 addons-431183 kubelet[1277]: I1016 17:45:36.819931    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-r2qlf" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 17:45:36 addons-431183 kubelet[1277]: I1016 17:45:36.991206    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-r2qlf" podStartSLOduration=2.056987898 podStartE2EDuration="30.99118888s" podCreationTimestamp="2025-10-16 17:45:06 +0000 UTC" firstStartedPulling="2025-10-16 17:45:07.043072528 +0000 UTC m=+47.583159940" lastFinishedPulling="2025-10-16 17:45:35.97727352 +0000 UTC m=+76.517360922" observedRunningTime="2025-10-16 17:45:36.990053836 +0000 UTC m=+77.530141256" watchObservedRunningTime="2025-10-16 17:45:36.99118888 +0000 UTC m=+77.531276300"
	Oct 16 17:45:37 addons-431183 kubelet[1277]: I1016 17:45:37.822365    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-r2qlf" secret="" err="secret \"gcp-auth\" not found"
	Oct 16 17:45:38 addons-431183 kubelet[1277]: E1016 17:45:38.461195    1277 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 16 17:45:38 addons-431183 kubelet[1277]: E1016 17:45:38.461304    1277 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff6144d2-13c8-475e-b307-4f201354f1d0-gcr-creds podName:ff6144d2-13c8-475e-b307-4f201354f1d0 nodeName:}" failed. No retries permitted until 2025-10-16 17:46:10.461280176 +0000 UTC m=+111.001367578 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/ff6144d2-13c8-475e-b307-4f201354f1d0-gcr-creds") pod "registry-creds-764b6fb674-4sqn6" (UID: "ff6144d2-13c8-475e-b307-4f201354f1d0") : secret "registry-creds-gcr" not found
	Oct 16 17:45:43 addons-431183 kubelet[1277]: I1016 17:45:43.863351    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-5qwrf" podStartSLOduration=60.493792782 podStartE2EDuration="1m17.863326251s" podCreationTimestamp="2025-10-16 17:44:26 +0000 UTC" firstStartedPulling="2025-10-16 17:45:22.61137819 +0000 UTC m=+63.151465602" lastFinishedPulling="2025-10-16 17:45:39.980911667 +0000 UTC m=+80.520999071" observedRunningTime="2025-10-16 17:45:40.853390546 +0000 UTC m=+81.393477967" watchObservedRunningTime="2025-10-16 17:45:43.863326251 +0000 UTC m=+84.403413671"
	Oct 16 17:45:43 addons-431183 kubelet[1277]: I1016 17:45:43.863461    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-rwgd7" podStartSLOduration=66.863298212 podStartE2EDuration="1m17.863455972s" podCreationTimestamp="2025-10-16 17:44:26 +0000 UTC" firstStartedPulling="2025-10-16 17:45:31.836975851 +0000 UTC m=+72.377063250" lastFinishedPulling="2025-10-16 17:45:42.8371336 +0000 UTC m=+83.377221010" observedRunningTime="2025-10-16 17:45:43.862303332 +0000 UTC m=+84.402390766" watchObservedRunningTime="2025-10-16 17:45:43.863455972 +0000 UTC m=+84.403543392"
	Oct 16 17:45:45 addons-431183 kubelet[1277]: I1016 17:45:45.873382    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-bjwlm" podStartSLOduration=67.464369308 podStartE2EDuration="1m12.873359443s" podCreationTimestamp="2025-10-16 17:44:33 +0000 UTC" firstStartedPulling="2025-10-16 17:45:39.93119841 +0000 UTC m=+80.471285826" lastFinishedPulling="2025-10-16 17:45:45.340188558 +0000 UTC m=+85.880275961" observedRunningTime="2025-10-16 17:45:45.871706103 +0000 UTC m=+86.411793523" watchObservedRunningTime="2025-10-16 17:45:45.873359443 +0000 UTC m=+86.413446863"
	Oct 16 17:45:46 addons-431183 kubelet[1277]: I1016 17:45:46.603205    1277 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 16 17:45:46 addons-431183 kubelet[1277]: I1016 17:45:46.603247    1277 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 16 17:45:48 addons-431183 kubelet[1277]: I1016 17:45:48.893175    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-lwfnt" podStartSLOduration=1.339820028 podStartE2EDuration="42.893143317s" podCreationTimestamp="2025-10-16 17:45:06 +0000 UTC" firstStartedPulling="2025-10-16 17:45:07.021957187 +0000 UTC m=+47.562044596" lastFinishedPulling="2025-10-16 17:45:48.575280473 +0000 UTC m=+89.115367885" observedRunningTime="2025-10-16 17:45:48.89196007 +0000 UTC m=+89.432047490" watchObservedRunningTime="2025-10-16 17:45:48.893143317 +0000 UTC m=+89.433230736"
	Oct 16 17:45:53 addons-431183 kubelet[1277]: I1016 17:45:53.278271    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9bcd0883-4637-415b-979c-50c3856ec728-gcp-creds\") pod \"busybox\" (UID: \"9bcd0883-4637-415b-979c-50c3856ec728\") " pod="default/busybox"
	Oct 16 17:45:53 addons-431183 kubelet[1277]: I1016 17:45:53.278340    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s89pp\" (UniqueName: \"kubernetes.io/projected/9bcd0883-4637-415b-979c-50c3856ec728-kube-api-access-s89pp\") pod \"busybox\" (UID: \"9bcd0883-4637-415b-979c-50c3856ec728\") " pod="default/busybox"
	Oct 16 17:45:54 addons-431183 kubelet[1277]: I1016 17:45:54.915669    1277 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.589908464 podStartE2EDuration="1.915649326s" podCreationTimestamp="2025-10-16 17:45:53 +0000 UTC" firstStartedPulling="2025-10-16 17:45:53.517227266 +0000 UTC m=+94.057314675" lastFinishedPulling="2025-10-16 17:45:54.842968138 +0000 UTC m=+95.383055537" observedRunningTime="2025-10-16 17:45:54.914978176 +0000 UTC m=+95.455065606" watchObservedRunningTime="2025-10-16 17:45:54.915649326 +0000 UTC m=+95.455736745"
	Oct 16 17:45:55 addons-431183 kubelet[1277]: I1016 17:45:55.545848    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ece93f8-7c2e-4f4d-b999-87f06e951e1a" path="/var/lib/kubelet/pods/1ece93f8-7c2e-4f4d-b999-87f06e951e1a/volumes"
	Oct 16 17:45:55 addons-431183 kubelet[1277]: I1016 17:45:55.546229    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50e28a64-7d2b-4310-abbe-7650d0f44db0" path="/var/lib/kubelet/pods/50e28a64-7d2b-4310-abbe-7650d0f44db0/volumes"
	
	
	==> storage-provisioner [a03f0987c6223c30fbe18342b589910e10048b7c4dbf5c3f99e594bc81dbb424] <==
	W1016 17:45:39.279110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:41.283445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:41.287825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:43.290179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:43.294625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:45.298104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:45.302397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:47.305836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:47.311064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:49.313320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:49.317995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:51.320907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:51.325146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:53.328382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:53.331920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:55.334631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:55.338728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:57.341969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:57.347405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:59.350815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:45:59.355759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:46:01.358755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:46:01.363123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:46:03.365884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:46:03.370741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
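
Editor's note: the paired warnings every two seconds are consistent with an Endpoints-based leader-election lock being renewed by the provisioner; the deprecation notice points at the discovery.k8s.io/v1 EndpointSlice API as the replacement. A minimal sketch, assuming a kubeconfig at the default path (file name hypothetical, not part of the suite), that reads that replacement API:

	// slices.go - hypothetical helper, not part of the suite.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// discovery.k8s.io/v1 EndpointSlice is the replacement the warning names.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
			context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name, len(s.Endpoints), "endpoints")
		}
	}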
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-431183 -n addons-431183
helpers_test.go:269: (dbg) Run:  kubectl --context addons-431183 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-74xz8 ingress-nginx-admission-patch-54q7q registry-creds-764b6fb674-4sqn6
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-431183 describe pod ingress-nginx-admission-create-74xz8 ingress-nginx-admission-patch-54q7q registry-creds-764b6fb674-4sqn6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-431183 describe pod ingress-nginx-admission-create-74xz8 ingress-nginx-admission-patch-54q7q registry-creds-764b6fb674-4sqn6: exit status 1 (60.49138ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-74xz8" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-54q7q" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-4sqn6" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-431183 describe pod ingress-nginx-admission-create-74xz8 ingress-nginx-admission-patch-54q7q registry-creds-764b6fb674-4sqn6: exit status 1
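
Editor's note: the list at helpers_test.go:280 still saw the two ingress-nginx admission pods, yet the describe moments later got NotFound; the completed admission jobs' pods were presumably cleaned up between the two commands. A minimal sketch, assuming a kubeconfig at the default path (file name hypothetical, not part of the suite), mirroring the field-selector query the helper passes to kubectl:

	// nonrunning.go - hypothetical helper, not part of the suite.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Same server-side field selector the helper uses.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace+"/"+p.Name, p.Status.Phase)
		}
	}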
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-431183 addons disable headlamp --alsologtostderr -v=1: exit status 11 (243.755182ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1016 17:46:04.069203   22811 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:46:04.069556   22811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:04.069569   22811 out.go:374] Setting ErrFile to fd 2...
	I1016 17:46:04.069576   22811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:04.069899   22811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:46:04.070238   22811 mustload.go:65] Loading cluster: addons-431183
	I1016 17:46:04.070708   22811 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:04.070749   22811 addons.go:606] checking whether the cluster is paused
	I1016 17:46:04.070875   22811 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:04.070890   22811 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:46:04.071443   22811 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:46:04.091414   22811 ssh_runner.go:195] Run: systemctl --version
	I1016 17:46:04.091472   22811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:46:04.109674   22811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:46:04.206287   22811 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 17:46:04.206387   22811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 17:46:04.236857   22811 cri.go:89] found id: "5de201fd76a95a80e483d6bee60036aeac5703e71d605551db1f90ed5e9267a7"
	I1016 17:46:04.236881   22811 cri.go:89] found id: "d2b409cc61d3e2b69dd7c9f80ddcd05def59f62a7cefd12f8c55f87dcd6806a2"
	I1016 17:46:04.236887   22811 cri.go:89] found id: "08fc54b7ecf7cfa954b370e40086d3605dd13a25c93db2881d92e1a2ceb32818"
	I1016 17:46:04.236892   22811 cri.go:89] found id: "65f92eb5c9126dc449d2f8032d89aa4e86a59da9246b185c71c940339c685cb2"
	I1016 17:46:04.236903   22811 cri.go:89] found id: "99bd9e93e1a1cfcb509eac334bc64e1eb422d9a27526d7a41c2e44010e147d51"
	I1016 17:46:04.236908   22811 cri.go:89] found id: "38a6424f0235c17e3591fa147b42d05aa0a270d890721e294fd2fe1e8fe7587f"
	I1016 17:46:04.236912   22811 cri.go:89] found id: "d2446d21f394d6a6ae9ea6cae0aa10dad1d25f3e04a34f1daec8456cb08d666b"
	I1016 17:46:04.236916   22811 cri.go:89] found id: "a6e738e35332b188947c0fa616526771177e13dac5478fce260658918fdb41b8"
	I1016 17:46:04.236920   22811 cri.go:89] found id: "b7a0a3afc5b5ee8079bdbc565b4e1804880cfc1ae2c416fa5c0bd0e8745f5bf3"
	I1016 17:46:04.236927   22811 cri.go:89] found id: "cbbc3b73b7dda0bb72f1dc0c07bdb653e82157caf2508af0e2dd8299580cbcc4"
	I1016 17:46:04.236935   22811 cri.go:89] found id: "dcfdf0dfc495c7bf8e3c5deff3d0667e25cef10a0ec40832e0dfed43139faddb"
	I1016 17:46:04.236939   22811 cri.go:89] found id: "cc78d2815338b398d4da9da584c4783f77c9936d139a116db70055573929cea8"
	I1016 17:46:04.236946   22811 cri.go:89] found id: "e825d0a32cabb6134f6fc2e76f0012259095724f465caf717fd20ec00f0f4761"
	I1016 17:46:04.236950   22811 cri.go:89] found id: "eec1c645d1dfa3722eaae814ffa92000df394ff370ff10ad29f463e88bc9c3a4"
	I1016 17:46:04.236959   22811 cri.go:89] found id: "8eb1df0ef8e8f68de86edf699e07499ed7450f104d16fdac57eadedc16c5f057"
	I1016 17:46:04.236982   22811 cri.go:89] found id: "eeac3283525767a2a2b238ada5540b61f7b05df294e06a358fe5559a660e9f17"
	I1016 17:46:04.236990   22811 cri.go:89] found id: "57066b214397994168ca58cce65e7697dc0566e9cbd79ddce6bd447b1bb937a0"
	I1016 17:46:04.236995   22811 cri.go:89] found id: "a03f0987c6223c30fbe18342b589910e10048b7c4dbf5c3f99e594bc81dbb424"
	I1016 17:46:04.236999   22811 cri.go:89] found id: "41d8ee31330479f662f22c3d691f1814f4211417eb9afa4e0e1436fff7459c0d"
	I1016 17:46:04.237048   22811 cri.go:89] found id: "45684000aebf96bc6ace9a9a3a1dc6c5c1d69d5fa2cba670031d05998617678d"
	I1016 17:46:04.237053   22811 cri.go:89] found id: "b6296707185d3b7e279abf2a92f9765e375112a754d9ea74965f9e1abd3911e2"
	I1016 17:46:04.237056   22811 cri.go:89] found id: "dff4028c6cade53db8c168852a237cc808da96d812dcf0a0d74a84d9fd8e1856"
	I1016 17:46:04.237060   22811 cri.go:89] found id: "9ddd87f44d89a802929d4d0b0c661f079cd91f596a5e0d6f5d10e18663962117"
	I1016 17:46:04.237066   22811 cri.go:89] found id: "11a2ed25b01f6ec1f1099f8328d958fe73b716ce518c158023e0e7704045d279"
	I1016 17:46:04.237071   22811 cri.go:89] found id: ""
	I1016 17:46:04.237127   22811 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 17:46:04.252068   22811 out.go:203] 
	W1016 17:46:04.253666   22811 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 17:46:04.253691   22811 out.go:285] * 
	* 
	W1016 17:46:04.259003   22811 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 17:46:04.260588   22811 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-431183 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.53s)
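Note on this and the other addon-disable failures below (CloudSpanner, LocalPath, NvidiaDevicePlugin, Yakd, AmdGpuDevicePlugin): each exits with MK_ADDON_DISABLE_PAUSED because `minikube addons disable` first checks whether the cluster is paused, and that check shells out to `sudo runc list -f json`. On this crio node the runc state root /run/runc does not exist, so the probe itself fails before any addon logic runs. A minimal sketch to confirm the mismatch by hand, assuming the profile name addons-431183 from the logs above (crictl is the CRI-level tool that the container listing earlier in the same log already used successfully):

	# expected to fail on this node: runc's state root is absent
	minikube -p addons-431183 ssh -- ls /run/runc
	# expected to succeed: the CRI runtime still reports running containers
	minikube -p addons-431183 ssh -- sudo crictl ps --state running --quiet

If the first command reports "No such file or directory" while the second prints container IDs, the paused check is probing the wrong runtime interface for a crio-backed node.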
x
+
TestAddons/parallel/CloudSpanner (5.25s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-6ncpk" [252420f1-59ef-4dbd-84f6-c6d8f041a54f] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003439442s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-431183 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (244.247147ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1016 17:46:20.737394   24972 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:46:20.737701   24972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:20.737725   24972 out.go:374] Setting ErrFile to fd 2...
	I1016 17:46:20.737733   24972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:20.737987   24972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:46:20.738271   24972 mustload.go:65] Loading cluster: addons-431183
	I1016 17:46:20.738619   24972 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:20.738634   24972 addons.go:606] checking whether the cluster is paused
	I1016 17:46:20.738760   24972 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:20.738780   24972 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:46:20.739508   24972 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:46:20.758626   24972 ssh_runner.go:195] Run: systemctl --version
	I1016 17:46:20.758679   24972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:46:20.776815   24972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:46:20.875774   24972 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 17:46:20.875886   24972 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 17:46:20.909663   24972 cri.go:89] found id: "5de201fd76a95a80e483d6bee60036aeac5703e71d605551db1f90ed5e9267a7"
	I1016 17:46:20.909690   24972 cri.go:89] found id: "d2b409cc61d3e2b69dd7c9f80ddcd05def59f62a7cefd12f8c55f87dcd6806a2"
	I1016 17:46:20.909694   24972 cri.go:89] found id: "08fc54b7ecf7cfa954b370e40086d3605dd13a25c93db2881d92e1a2ceb32818"
	I1016 17:46:20.909697   24972 cri.go:89] found id: "65f92eb5c9126dc449d2f8032d89aa4e86a59da9246b185c71c940339c685cb2"
	I1016 17:46:20.909699   24972 cri.go:89] found id: "99bd9e93e1a1cfcb509eac334bc64e1eb422d9a27526d7a41c2e44010e147d51"
	I1016 17:46:20.909704   24972 cri.go:89] found id: "38a6424f0235c17e3591fa147b42d05aa0a270d890721e294fd2fe1e8fe7587f"
	I1016 17:46:20.909706   24972 cri.go:89] found id: "d2446d21f394d6a6ae9ea6cae0aa10dad1d25f3e04a34f1daec8456cb08d666b"
	I1016 17:46:20.909709   24972 cri.go:89] found id: "a6e738e35332b188947c0fa616526771177e13dac5478fce260658918fdb41b8"
	I1016 17:46:20.909711   24972 cri.go:89] found id: "b7a0a3afc5b5ee8079bdbc565b4e1804880cfc1ae2c416fa5c0bd0e8745f5bf3"
	I1016 17:46:20.909752   24972 cri.go:89] found id: "cbbc3b73b7dda0bb72f1dc0c07bdb653e82157caf2508af0e2dd8299580cbcc4"
	I1016 17:46:20.909760   24972 cri.go:89] found id: "dcfdf0dfc495c7bf8e3c5deff3d0667e25cef10a0ec40832e0dfed43139faddb"
	I1016 17:46:20.909762   24972 cri.go:89] found id: "cc78d2815338b398d4da9da584c4783f77c9936d139a116db70055573929cea8"
	I1016 17:46:20.909765   24972 cri.go:89] found id: "e825d0a32cabb6134f6fc2e76f0012259095724f465caf717fd20ec00f0f4761"
	I1016 17:46:20.909768   24972 cri.go:89] found id: "eec1c645d1dfa3722eaae814ffa92000df394ff370ff10ad29f463e88bc9c3a4"
	I1016 17:46:20.909771   24972 cri.go:89] found id: "8eb1df0ef8e8f68de86edf699e07499ed7450f104d16fdac57eadedc16c5f057"
	I1016 17:46:20.909777   24972 cri.go:89] found id: "eeac3283525767a2a2b238ada5540b61f7b05df294e06a358fe5559a660e9f17"
	I1016 17:46:20.909783   24972 cri.go:89] found id: "57066b214397994168ca58cce65e7697dc0566e9cbd79ddce6bd447b1bb937a0"
	I1016 17:46:20.909790   24972 cri.go:89] found id: "a03f0987c6223c30fbe18342b589910e10048b7c4dbf5c3f99e594bc81dbb424"
	I1016 17:46:20.909793   24972 cri.go:89] found id: "41d8ee31330479f662f22c3d691f1814f4211417eb9afa4e0e1436fff7459c0d"
	I1016 17:46:20.909795   24972 cri.go:89] found id: "45684000aebf96bc6ace9a9a3a1dc6c5c1d69d5fa2cba670031d05998617678d"
	I1016 17:46:20.909797   24972 cri.go:89] found id: "b6296707185d3b7e279abf2a92f9765e375112a754d9ea74965f9e1abd3911e2"
	I1016 17:46:20.909802   24972 cri.go:89] found id: "dff4028c6cade53db8c168852a237cc808da96d812dcf0a0d74a84d9fd8e1856"
	I1016 17:46:20.909805   24972 cri.go:89] found id: "9ddd87f44d89a802929d4d0b0c661f079cd91f596a5e0d6f5d10e18663962117"
	I1016 17:46:20.909807   24972 cri.go:89] found id: "11a2ed25b01f6ec1f1099f8328d958fe73b716ce518c158023e0e7704045d279"
	I1016 17:46:20.909810   24972 cri.go:89] found id: ""
	I1016 17:46:20.909856   24972 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 17:46:20.925653   24972 out.go:203] 
	W1016 17:46:20.926937   24972 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 17:46:20.926956   24972 out.go:285] * 
	* 
	W1016 17:46:20.929973   24972 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 17:46:20.931249   24972 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-431183 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.25s)
x
+
TestAddons/parallel/LocalPath (8.28s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-431183 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-431183 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-431183 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [2064adb7-940c-48fa-a9ae-c8766312414d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [2064adb7-940c-48fa-a9ae-c8766312414d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [2064adb7-940c-48fa-a9ae-c8766312414d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003319977s
addons_test.go:967: (dbg) Run:  kubectl --context addons-431183 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 ssh "cat /opt/local-path-provisioner/pvc-b51ae802-df03-41ae-8349-d78df8b133fd_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-431183 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-431183 delete pvc test-pvc
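For context, the LocalPath flow above provisions a PersistentVolumeClaim against the local-path storage class, runs a pod that writes file1 into the bound volume, and then reads the file back from /opt/local-path-provisioner on the node over SSH. The testdata manifests are not included in this report; the following heredoc is a hypothetical minimal equivalent of pvc.yaml (the storageClassName and size are assumptions based on local-path-provisioner defaults, not the actual testdata contents):

	kubectl --context addons-431183 apply -f - <<EOF
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc          # hypothetical reconstruction of testdata/storage-provisioner-rancher/pvc.yaml
	spec:
	  storageClassName: local-path
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 64Mi       # assumed size; the actual testdata value is not shown in this report
	EOF

The PVC stays Pending until a consuming pod is scheduled (local-path uses WaitForFirstConsumer volume binding), which is why the phase poll above loops several times before the test-local-path pod appears.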
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-431183 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (276.528581ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1016 17:46:15.046628   24174 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:46:15.046942   24174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:15.046953   24174 out.go:374] Setting ErrFile to fd 2...
	I1016 17:46:15.046959   24174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:15.047230   24174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:46:15.047522   24174 mustload.go:65] Loading cluster: addons-431183
	I1016 17:46:15.047941   24174 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:15.047960   24174 addons.go:606] checking whether the cluster is paused
	I1016 17:46:15.048101   24174 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:15.048118   24174 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:46:15.048579   24174 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:46:15.071649   24174 ssh_runner.go:195] Run: systemctl --version
	I1016 17:46:15.071709   24174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:46:15.095490   24174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:46:15.202039   24174 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 17:46:15.202129   24174 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 17:46:15.238369   24174 cri.go:89] found id: "5de201fd76a95a80e483d6bee60036aeac5703e71d605551db1f90ed5e9267a7"
	I1016 17:46:15.238389   24174 cri.go:89] found id: "d2b409cc61d3e2b69dd7c9f80ddcd05def59f62a7cefd12f8c55f87dcd6806a2"
	I1016 17:46:15.238392   24174 cri.go:89] found id: "08fc54b7ecf7cfa954b370e40086d3605dd13a25c93db2881d92e1a2ceb32818"
	I1016 17:46:15.238395   24174 cri.go:89] found id: "65f92eb5c9126dc449d2f8032d89aa4e86a59da9246b185c71c940339c685cb2"
	I1016 17:46:15.238398   24174 cri.go:89] found id: "99bd9e93e1a1cfcb509eac334bc64e1eb422d9a27526d7a41c2e44010e147d51"
	I1016 17:46:15.238402   24174 cri.go:89] found id: "38a6424f0235c17e3591fa147b42d05aa0a270d890721e294fd2fe1e8fe7587f"
	I1016 17:46:15.238404   24174 cri.go:89] found id: "d2446d21f394d6a6ae9ea6cae0aa10dad1d25f3e04a34f1daec8456cb08d666b"
	I1016 17:46:15.238406   24174 cri.go:89] found id: "a6e738e35332b188947c0fa616526771177e13dac5478fce260658918fdb41b8"
	I1016 17:46:15.238409   24174 cri.go:89] found id: "b7a0a3afc5b5ee8079bdbc565b4e1804880cfc1ae2c416fa5c0bd0e8745f5bf3"
	I1016 17:46:15.238421   24174 cri.go:89] found id: "cbbc3b73b7dda0bb72f1dc0c07bdb653e82157caf2508af0e2dd8299580cbcc4"
	I1016 17:46:15.238426   24174 cri.go:89] found id: "dcfdf0dfc495c7bf8e3c5deff3d0667e25cef10a0ec40832e0dfed43139faddb"
	I1016 17:46:15.238429   24174 cri.go:89] found id: "cc78d2815338b398d4da9da584c4783f77c9936d139a116db70055573929cea8"
	I1016 17:46:15.238433   24174 cri.go:89] found id: "e825d0a32cabb6134f6fc2e76f0012259095724f465caf717fd20ec00f0f4761"
	I1016 17:46:15.238436   24174 cri.go:89] found id: "eec1c645d1dfa3722eaae814ffa92000df394ff370ff10ad29f463e88bc9c3a4"
	I1016 17:46:15.238440   24174 cri.go:89] found id: "8eb1df0ef8e8f68de86edf699e07499ed7450f104d16fdac57eadedc16c5f057"
	I1016 17:46:15.238445   24174 cri.go:89] found id: "eeac3283525767a2a2b238ada5540b61f7b05df294e06a358fe5559a660e9f17"
	I1016 17:46:15.238451   24174 cri.go:89] found id: "57066b214397994168ca58cce65e7697dc0566e9cbd79ddce6bd447b1bb937a0"
	I1016 17:46:15.238457   24174 cri.go:89] found id: "a03f0987c6223c30fbe18342b589910e10048b7c4dbf5c3f99e594bc81dbb424"
	I1016 17:46:15.238461   24174 cri.go:89] found id: "41d8ee31330479f662f22c3d691f1814f4211417eb9afa4e0e1436fff7459c0d"
	I1016 17:46:15.238464   24174 cri.go:89] found id: "45684000aebf96bc6ace9a9a3a1dc6c5c1d69d5fa2cba670031d05998617678d"
	I1016 17:46:15.238467   24174 cri.go:89] found id: "b6296707185d3b7e279abf2a92f9765e375112a754d9ea74965f9e1abd3911e2"
	I1016 17:46:15.238470   24174 cri.go:89] found id: "dff4028c6cade53db8c168852a237cc808da96d812dcf0a0d74a84d9fd8e1856"
	I1016 17:46:15.238473   24174 cri.go:89] found id: "9ddd87f44d89a802929d4d0b0c661f079cd91f596a5e0d6f5d10e18663962117"
	I1016 17:46:15.238476   24174 cri.go:89] found id: "11a2ed25b01f6ec1f1099f8328d958fe73b716ce518c158023e0e7704045d279"
	I1016 17:46:15.238485   24174 cri.go:89] found id: ""
	I1016 17:46:15.238599   24174 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 17:46:15.253319   24174 out.go:203] 
	W1016 17:46:15.255204   24174 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 17:46:15.255228   24174 out.go:285] * 
	* 
	W1016 17:46:15.258841   24174 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 17:46:15.264388   24174 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-431183 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.28s)
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.25s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-kcsqr" [895271a9-cb66-441d-924c-5aab58267f88] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003871616s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-431183 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (246.717676ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1016 17:46:06.789419   22903 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:46:06.789744   22903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:06.789756   22903 out.go:374] Setting ErrFile to fd 2...
	I1016 17:46:06.789760   22903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:06.789938   22903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:46:06.790197   22903 mustload.go:65] Loading cluster: addons-431183
	I1016 17:46:06.790541   22903 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:06.790557   22903 addons.go:606] checking whether the cluster is paused
	I1016 17:46:06.790639   22903 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:06.790648   22903 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:46:06.791018   22903 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:46:06.809865   22903 ssh_runner.go:195] Run: systemctl --version
	I1016 17:46:06.809921   22903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:46:06.829497   22903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:46:06.927452   22903 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 17:46:06.927523   22903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 17:46:06.962152   22903 cri.go:89] found id: "5de201fd76a95a80e483d6bee60036aeac5703e71d605551db1f90ed5e9267a7"
	I1016 17:46:06.962190   22903 cri.go:89] found id: "d2b409cc61d3e2b69dd7c9f80ddcd05def59f62a7cefd12f8c55f87dcd6806a2"
	I1016 17:46:06.962195   22903 cri.go:89] found id: "08fc54b7ecf7cfa954b370e40086d3605dd13a25c93db2881d92e1a2ceb32818"
	I1016 17:46:06.962200   22903 cri.go:89] found id: "65f92eb5c9126dc449d2f8032d89aa4e86a59da9246b185c71c940339c685cb2"
	I1016 17:46:06.962205   22903 cri.go:89] found id: "99bd9e93e1a1cfcb509eac334bc64e1eb422d9a27526d7a41c2e44010e147d51"
	I1016 17:46:06.962217   22903 cri.go:89] found id: "38a6424f0235c17e3591fa147b42d05aa0a270d890721e294fd2fe1e8fe7587f"
	I1016 17:46:06.962222   22903 cri.go:89] found id: "d2446d21f394d6a6ae9ea6cae0aa10dad1d25f3e04a34f1daec8456cb08d666b"
	I1016 17:46:06.962226   22903 cri.go:89] found id: "a6e738e35332b188947c0fa616526771177e13dac5478fce260658918fdb41b8"
	I1016 17:46:06.962231   22903 cri.go:89] found id: "b7a0a3afc5b5ee8079bdbc565b4e1804880cfc1ae2c416fa5c0bd0e8745f5bf3"
	I1016 17:46:06.962242   22903 cri.go:89] found id: "cbbc3b73b7dda0bb72f1dc0c07bdb653e82157caf2508af0e2dd8299580cbcc4"
	I1016 17:46:06.962252   22903 cri.go:89] found id: "dcfdf0dfc495c7bf8e3c5deff3d0667e25cef10a0ec40832e0dfed43139faddb"
	I1016 17:46:06.962256   22903 cri.go:89] found id: "cc78d2815338b398d4da9da584c4783f77c9936d139a116db70055573929cea8"
	I1016 17:46:06.962259   22903 cri.go:89] found id: "e825d0a32cabb6134f6fc2e76f0012259095724f465caf717fd20ec00f0f4761"
	I1016 17:46:06.962263   22903 cri.go:89] found id: "eec1c645d1dfa3722eaae814ffa92000df394ff370ff10ad29f463e88bc9c3a4"
	I1016 17:46:06.962267   22903 cri.go:89] found id: "8eb1df0ef8e8f68de86edf699e07499ed7450f104d16fdac57eadedc16c5f057"
	I1016 17:46:06.962282   22903 cri.go:89] found id: "eeac3283525767a2a2b238ada5540b61f7b05df294e06a358fe5559a660e9f17"
	I1016 17:46:06.962292   22903 cri.go:89] found id: "57066b214397994168ca58cce65e7697dc0566e9cbd79ddce6bd447b1bb937a0"
	I1016 17:46:06.962298   22903 cri.go:89] found id: "a03f0987c6223c30fbe18342b589910e10048b7c4dbf5c3f99e594bc81dbb424"
	I1016 17:46:06.962302   22903 cri.go:89] found id: "41d8ee31330479f662f22c3d691f1814f4211417eb9afa4e0e1436fff7459c0d"
	I1016 17:46:06.962305   22903 cri.go:89] found id: "45684000aebf96bc6ace9a9a3a1dc6c5c1d69d5fa2cba670031d05998617678d"
	I1016 17:46:06.962309   22903 cri.go:89] found id: "b6296707185d3b7e279abf2a92f9765e375112a754d9ea74965f9e1abd3911e2"
	I1016 17:46:06.962312   22903 cri.go:89] found id: "dff4028c6cade53db8c168852a237cc808da96d812dcf0a0d74a84d9fd8e1856"
	I1016 17:46:06.962316   22903 cri.go:89] found id: "9ddd87f44d89a802929d4d0b0c661f079cd91f596a5e0d6f5d10e18663962117"
	I1016 17:46:06.962320   22903 cri.go:89] found id: "11a2ed25b01f6ec1f1099f8328d958fe73b716ce518c158023e0e7704045d279"
	I1016 17:46:06.962324   22903 cri.go:89] found id: ""
	I1016 17:46:06.962372   22903 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 17:46:06.977371   22903 out.go:203] 
	W1016 17:46:06.979182   22903 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 17:46:06.979217   22903 out.go:285] * 
	* 
	W1016 17:46:06.982222   22903 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 17:46:06.984096   22903 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-431183 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.25s)
x
+
TestAddons/parallel/Yakd (5.24s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-6dx84" [37812062-2997-4ca3-b2cd-63e4b972e6f0] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005067189s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-431183 addons disable yakd --alsologtostderr -v=1: exit status 11 (231.003499ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1016 17:46:09.315168   23136 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:46:09.315303   23136 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:09.315313   23136 out.go:374] Setting ErrFile to fd 2...
	I1016 17:46:09.315317   23136 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:09.315520   23136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:46:09.315773   23136 mustload.go:65] Loading cluster: addons-431183
	I1016 17:46:09.316079   23136 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:09.316097   23136 addons.go:606] checking whether the cluster is paused
	I1016 17:46:09.316172   23136 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:09.316183   23136 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:46:09.316516   23136 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:46:09.334406   23136 ssh_runner.go:195] Run: systemctl --version
	I1016 17:46:09.334454   23136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:46:09.352274   23136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:46:09.448526   23136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 17:46:09.448605   23136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 17:46:09.477463   23136 cri.go:89] found id: "5de201fd76a95a80e483d6bee60036aeac5703e71d605551db1f90ed5e9267a7"
	I1016 17:46:09.477486   23136 cri.go:89] found id: "d2b409cc61d3e2b69dd7c9f80ddcd05def59f62a7cefd12f8c55f87dcd6806a2"
	I1016 17:46:09.477492   23136 cri.go:89] found id: "08fc54b7ecf7cfa954b370e40086d3605dd13a25c93db2881d92e1a2ceb32818"
	I1016 17:46:09.477497   23136 cri.go:89] found id: "65f92eb5c9126dc449d2f8032d89aa4e86a59da9246b185c71c940339c685cb2"
	I1016 17:46:09.477501   23136 cri.go:89] found id: "99bd9e93e1a1cfcb509eac334bc64e1eb422d9a27526d7a41c2e44010e147d51"
	I1016 17:46:09.477506   23136 cri.go:89] found id: "38a6424f0235c17e3591fa147b42d05aa0a270d890721e294fd2fe1e8fe7587f"
	I1016 17:46:09.477510   23136 cri.go:89] found id: "d2446d21f394d6a6ae9ea6cae0aa10dad1d25f3e04a34f1daec8456cb08d666b"
	I1016 17:46:09.477514   23136 cri.go:89] found id: "a6e738e35332b188947c0fa616526771177e13dac5478fce260658918fdb41b8"
	I1016 17:46:09.477518   23136 cri.go:89] found id: "b7a0a3afc5b5ee8079bdbc565b4e1804880cfc1ae2c416fa5c0bd0e8745f5bf3"
	I1016 17:46:09.477525   23136 cri.go:89] found id: "cbbc3b73b7dda0bb72f1dc0c07bdb653e82157caf2508af0e2dd8299580cbcc4"
	I1016 17:46:09.477527   23136 cri.go:89] found id: "dcfdf0dfc495c7bf8e3c5deff3d0667e25cef10a0ec40832e0dfed43139faddb"
	I1016 17:46:09.477530   23136 cri.go:89] found id: "cc78d2815338b398d4da9da584c4783f77c9936d139a116db70055573929cea8"
	I1016 17:46:09.477532   23136 cri.go:89] found id: "e825d0a32cabb6134f6fc2e76f0012259095724f465caf717fd20ec00f0f4761"
	I1016 17:46:09.477535   23136 cri.go:89] found id: "eec1c645d1dfa3722eaae814ffa92000df394ff370ff10ad29f463e88bc9c3a4"
	I1016 17:46:09.477537   23136 cri.go:89] found id: "8eb1df0ef8e8f68de86edf699e07499ed7450f104d16fdac57eadedc16c5f057"
	I1016 17:46:09.477554   23136 cri.go:89] found id: "eeac3283525767a2a2b238ada5540b61f7b05df294e06a358fe5559a660e9f17"
	I1016 17:46:09.477562   23136 cri.go:89] found id: "57066b214397994168ca58cce65e7697dc0566e9cbd79ddce6bd447b1bb937a0"
	I1016 17:46:09.477567   23136 cri.go:89] found id: "a03f0987c6223c30fbe18342b589910e10048b7c4dbf5c3f99e594bc81dbb424"
	I1016 17:46:09.477569   23136 cri.go:89] found id: "41d8ee31330479f662f22c3d691f1814f4211417eb9afa4e0e1436fff7459c0d"
	I1016 17:46:09.477571   23136 cri.go:89] found id: "45684000aebf96bc6ace9a9a3a1dc6c5c1d69d5fa2cba670031d05998617678d"
	I1016 17:46:09.477577   23136 cri.go:89] found id: "b6296707185d3b7e279abf2a92f9765e375112a754d9ea74965f9e1abd3911e2"
	I1016 17:46:09.477579   23136 cri.go:89] found id: "dff4028c6cade53db8c168852a237cc808da96d812dcf0a0d74a84d9fd8e1856"
	I1016 17:46:09.477582   23136 cri.go:89] found id: "9ddd87f44d89a802929d4d0b0c661f079cd91f596a5e0d6f5d10e18663962117"
	I1016 17:46:09.477584   23136 cri.go:89] found id: "11a2ed25b01f6ec1f1099f8328d958fe73b716ce518c158023e0e7704045d279"
	I1016 17:46:09.477587   23136 cri.go:89] found id: ""
	I1016 17:46:09.477621   23136 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 17:46:09.491799   23136 out.go:203] 
	W1016 17:46:09.493107   23136 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 17:46:09.493127   23136 out.go:285] * 
	* 
	W1016 17:46:09.496313   23136 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 17:46:09.497709   23136 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-431183 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.24s)
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.25s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-6bmbl" [92edcbbf-d797-4999-8ce6-d9bd732cc23e] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003911417s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-431183 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-431183 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (245.03301ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1016 17:46:06.789920   22904 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:46:06.790188   22904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:06.790198   22904 out.go:374] Setting ErrFile to fd 2...
	I1016 17:46:06.790202   22904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:46:06.790388   22904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:46:06.790621   22904 mustload.go:65] Loading cluster: addons-431183
	I1016 17:46:06.791057   22904 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:06.791071   22904 addons.go:606] checking whether the cluster is paused
	I1016 17:46:06.791216   22904 config.go:182] Loaded profile config "addons-431183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:46:06.791233   22904 host.go:66] Checking if "addons-431183" exists ...
	I1016 17:46:06.791729   22904 cli_runner.go:164] Run: docker container inspect addons-431183 --format={{.State.Status}}
	I1016 17:46:06.810066   22904 ssh_runner.go:195] Run: systemctl --version
	I1016 17:46:06.810125   22904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-431183
	I1016 17:46:06.829153   22904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/addons-431183/id_rsa Username:docker}
	I1016 17:46:06.927739   22904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 17:46:06.927808   22904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 17:46:06.960520   22904 cri.go:89] found id: "5de201fd76a95a80e483d6bee60036aeac5703e71d605551db1f90ed5e9267a7"
	I1016 17:46:06.960543   22904 cri.go:89] found id: "d2b409cc61d3e2b69dd7c9f80ddcd05def59f62a7cefd12f8c55f87dcd6806a2"
	I1016 17:46:06.960549   22904 cri.go:89] found id: "08fc54b7ecf7cfa954b370e40086d3605dd13a25c93db2881d92e1a2ceb32818"
	I1016 17:46:06.960554   22904 cri.go:89] found id: "65f92eb5c9126dc449d2f8032d89aa4e86a59da9246b185c71c940339c685cb2"
	I1016 17:46:06.960558   22904 cri.go:89] found id: "99bd9e93e1a1cfcb509eac334bc64e1eb422d9a27526d7a41c2e44010e147d51"
	I1016 17:46:06.960564   22904 cri.go:89] found id: "38a6424f0235c17e3591fa147b42d05aa0a270d890721e294fd2fe1e8fe7587f"
	I1016 17:46:06.960568   22904 cri.go:89] found id: "d2446d21f394d6a6ae9ea6cae0aa10dad1d25f3e04a34f1daec8456cb08d666b"
	I1016 17:46:06.960573   22904 cri.go:89] found id: "a6e738e35332b188947c0fa616526771177e13dac5478fce260658918fdb41b8"
	I1016 17:46:06.960577   22904 cri.go:89] found id: "b7a0a3afc5b5ee8079bdbc565b4e1804880cfc1ae2c416fa5c0bd0e8745f5bf3"
	I1016 17:46:06.960637   22904 cri.go:89] found id: "cbbc3b73b7dda0bb72f1dc0c07bdb653e82157caf2508af0e2dd8299580cbcc4"
	I1016 17:46:06.960649   22904 cri.go:89] found id: "dcfdf0dfc495c7bf8e3c5deff3d0667e25cef10a0ec40832e0dfed43139faddb"
	I1016 17:46:06.960653   22904 cri.go:89] found id: "cc78d2815338b398d4da9da584c4783f77c9936d139a116db70055573929cea8"
	I1016 17:46:06.960657   22904 cri.go:89] found id: "e825d0a32cabb6134f6fc2e76f0012259095724f465caf717fd20ec00f0f4761"
	I1016 17:46:06.960662   22904 cri.go:89] found id: "eec1c645d1dfa3722eaae814ffa92000df394ff370ff10ad29f463e88bc9c3a4"
	I1016 17:46:06.960666   22904 cri.go:89] found id: "8eb1df0ef8e8f68de86edf699e07499ed7450f104d16fdac57eadedc16c5f057"
	I1016 17:46:06.960673   22904 cri.go:89] found id: "eeac3283525767a2a2b238ada5540b61f7b05df294e06a358fe5559a660e9f17"
	I1016 17:46:06.960678   22904 cri.go:89] found id: "57066b214397994168ca58cce65e7697dc0566e9cbd79ddce6bd447b1bb937a0"
	I1016 17:46:06.960684   22904 cri.go:89] found id: "a03f0987c6223c30fbe18342b589910e10048b7c4dbf5c3f99e594bc81dbb424"
	I1016 17:46:06.960688   22904 cri.go:89] found id: "41d8ee31330479f662f22c3d691f1814f4211417eb9afa4e0e1436fff7459c0d"
	I1016 17:46:06.960692   22904 cri.go:89] found id: "45684000aebf96bc6ace9a9a3a1dc6c5c1d69d5fa2cba670031d05998617678d"
	I1016 17:46:06.960696   22904 cri.go:89] found id: "b6296707185d3b7e279abf2a92f9765e375112a754d9ea74965f9e1abd3911e2"
	I1016 17:46:06.960700   22904 cri.go:89] found id: "dff4028c6cade53db8c168852a237cc808da96d812dcf0a0d74a84d9fd8e1856"
	I1016 17:46:06.960705   22904 cri.go:89] found id: "9ddd87f44d89a802929d4d0b0c661f079cd91f596a5e0d6f5d10e18663962117"
	I1016 17:46:06.960709   22904 cri.go:89] found id: "11a2ed25b01f6ec1f1099f8328d958fe73b716ce518c158023e0e7704045d279"
	I1016 17:46:06.960726   22904 cri.go:89] found id: ""
	I1016 17:46:06.960779   22904 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 17:46:06.976393   22904 out.go:203] 
	W1016 17:46:06.978286   22904 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:46:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 17:46:06.978316   22904 out.go:285] * 
	* 
	W1016 17:46:06.981497   22904 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 17:46:06.983169   22904 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-431183 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.25s)
x
+
TestFunctional/parallel/ServiceCmdConnect (603s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-363627 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-363627 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-fkz5q" [7dd10c13-4fc5-4243-8f20-22752bcc2dc1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-363627 -n functional-363627
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-16 18:02:03.15629719 +0000 UTC m=+1112.310991710
functional_test.go:1645: (dbg) Run:  kubectl --context functional-363627 describe po hello-node-connect-7d85dfc575-fkz5q -n default
functional_test.go:1645: (dbg) kubectl --context functional-363627 describe po hello-node-connect-7d85dfc575-fkz5q -n default:
Name:             hello-node-connect-7d85dfc575-fkz5q
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-363627/192.168.49.2
Start Time:       Thu, 16 Oct 2025 17:52:02 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-djw6c (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-djw6c:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-fkz5q to functional-363627
Normal   Pulling    7m6s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m52s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m38s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
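The decisive event is the "short name mode is enforcing" failure: the deployment was created with the unqualified image name kicbase/echo-server, and crio's short-name resolution, configured as enforcing, refuses to pick a registry non-interactively when more than one unqualified-search registry could serve the name. A hedged fix sketch, not something the test harness does: fully qualify the image so short-name resolution never runs (docker.io is an assumption about where kicbase/echo-server actually lives):

	$ kubectl --context functional-363627 set image deployment/hello-node-connect \
	    echo-server=docker.io/kicbase/echo-server:latest
	$ kubectl --context functional-363627 rollout status deployment/hello-node-connect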
functional_test.go:1645: (dbg) Run:  kubectl --context functional-363627 logs hello-node-connect-7d85dfc575-fkz5q -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-363627 logs hello-node-connect-7d85dfc575-fkz5q -n default: exit status 1 (62.312221ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-fkz5q" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-363627 logs hello-node-connect-7d85dfc575-fkz5q -n default: exit status 1
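Note that `kubectl logs` comes back empty here by construction: the container never started, so there is no log stream, and the BadRequest above is expected. A sketch of where the pull error actually surfaces, using only standard kubectl (the pod name is the one from this run):

	$ kubectl --context functional-363627 get events -n default \
	    --field-selector involvedObject.name=hello-node-connect-7d85dfc575-fkz5q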
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-363627 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-fkz5q
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-363627/192.168.49.2
Start Time:       Thu, 16 Oct 2025 17:52:02 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-djw6c (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-djw6c:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-fkz5q to functional-363627
Normal   Pulling    7m6s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m6s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m6s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m52s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m38s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
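An alternative, cluster-level sketch (again, not something this test performs): relax crio's short-name handling on the node so unqualified pulls fall back to trying each unqualified-search registry in order instead of failing. The sed pattern assumes /etc/containers/registries.conf already carries a short-name-mode line:

	$ minikube -p functional-363627 ssh -- sudo sed -i \
	    's/^short-name-mode *=.*/short-name-mode = "permissive"/' /etc/containers/registries.conf
	$ minikube -p functional-363627 ssh -- sudo systemctl restart crio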

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-363627 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-363627 logs -l app=hello-node-connect: exit status 1 (68.073785ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-fkz5q" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-363627 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-363627 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.156.165
IPs:                      10.100.156.165
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32242/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
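The empty Endpoints line is the service-level symptom of the same pull failure: the selector matches the pod, but a pod that never becomes Ready is never added as an endpoint, so NodePort 32242 has no backends. A quick confirmation sketch (IP and port taken from the describe output above):

	$ kubectl --context functional-363627 get endpoints hello-node-connect
	$ curl --max-time 5 http://192.168.49.2:32242/ || echo "no ready endpoints behind the NodePort"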
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-363627
helpers_test.go:243: (dbg) docker inspect functional-363627:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7348ec0110935d654e26e0739c7be5564d95e71c34fedd9d8e30fd3b7111122f",
	        "Created": "2025-10-16T17:49:51.419836685Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 36125,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T17:49:51.455355895Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/7348ec0110935d654e26e0739c7be5564d95e71c34fedd9d8e30fd3b7111122f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7348ec0110935d654e26e0739c7be5564d95e71c34fedd9d8e30fd3b7111122f/hostname",
	        "HostsPath": "/var/lib/docker/containers/7348ec0110935d654e26e0739c7be5564d95e71c34fedd9d8e30fd3b7111122f/hosts",
	        "LogPath": "/var/lib/docker/containers/7348ec0110935d654e26e0739c7be5564d95e71c34fedd9d8e30fd3b7111122f/7348ec0110935d654e26e0739c7be5564d95e71c34fedd9d8e30fd3b7111122f-json.log",
	        "Name": "/functional-363627",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-363627:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-363627",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7348ec0110935d654e26e0739c7be5564d95e71c34fedd9d8e30fd3b7111122f",
	                "LowerDir": "/var/lib/docker/overlay2/d709c7ced7f5f39b5f262dacc9662be0bd585cbedebdd0e4d6073bd6e1d47cc5-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d709c7ced7f5f39b5f262dacc9662be0bd585cbedebdd0e4d6073bd6e1d47cc5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d709c7ced7f5f39b5f262dacc9662be0bd585cbedebdd0e4d6073bd6e1d47cc5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d709c7ced7f5f39b5f262dacc9662be0bd585cbedebdd0e4d6073bd6e1d47cc5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-363627",
	                "Source": "/var/lib/docker/volumes/functional-363627/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-363627",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-363627",
	                "name.minikube.sigs.k8s.io": "functional-363627",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a60c4aa8f2dcc9d2f4fb0013f50172373b60cd59ccd8cf83b308ee802a6483ab",
	            "SandboxKey": "/var/run/docker/netns/a60c4aa8f2dc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-363627": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:80:b1:ad:b1:51",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b8b9cada530a3a36a545114ce8c999a94468f10eac7343e2d1a8e445df67803d",
	                    "EndpointID": "f7ef102dcbc09aa6d1154b8eaa14f43696b2b415303058c0f7d7845bc4e0ed7c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-363627",
	                        "7348ec011093"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
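The inspect output is mostly routine; the useful bits for debugging are the published ports under NetworkSettings.Ports (the apiserver's 8441/tcp is bound to 127.0.0.1:32781) and the 192.168.49.2 address on the functional-363627 network. A sketch for pulling the apiserver mapping out directly; jq being available on the CI host is an assumption:

	$ docker inspect functional-363627 \
	    | jq -r '.[0].NetworkSettings.Ports["8441/tcp"][0].HostPort'   # → 32781 per the Ports block above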
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-363627 -n functional-363627
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-363627 logs -n 25: (1.367489225s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-363627 /tmp/TestFunctionalparallelMountCmdVerifyCleanup846723355/001:/mount2 --alsologtostderr -v=1 │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │                     │
	│ ssh            │ functional-363627 ssh findmnt -T /mount1                                                                          │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │ 16 Oct 25 17:52 UTC │
	│ ssh            │ functional-363627 ssh findmnt -T /mount2                                                                          │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │ 16 Oct 25 17:52 UTC │
	│ ssh            │ functional-363627 ssh findmnt -T /mount3                                                                          │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │ 16 Oct 25 17:52 UTC │
	│ mount          │ -p functional-363627 --kill=true                                                                                  │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │                     │
	│ start          │ -p functional-363627 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio         │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │                     │
	│ start          │ -p functional-363627 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                   │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-363627 --alsologtostderr -v=1                                                    │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │ 16 Oct 25 17:52 UTC │
	│ start          │ -p functional-363627 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio         │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │                     │
	│ ssh            │ functional-363627 ssh sudo cat /etc/test/nested/copy/12375/hosts                                                  │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │ 16 Oct 25 17:52 UTC │
	│ update-context │ functional-363627 update-context --alsologtostderr -v=2                                                           │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │ 16 Oct 25 17:52 UTC │
	│ update-context │ functional-363627 update-context --alsologtostderr -v=2                                                           │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │ 16 Oct 25 17:52 UTC │
	│ update-context │ functional-363627 update-context --alsologtostderr -v=2                                                           │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │ 16 Oct 25 17:52 UTC │
	│ image          │ functional-363627 image ls --format short --alsologtostderr                                                       │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │ 16 Oct 25 17:52 UTC │
	│ image          │ functional-363627 image ls --format json --alsologtostderr                                                        │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │ 16 Oct 25 17:52 UTC │
	│ image          │ functional-363627 image ls --format table --alsologtostderr                                                       │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │ 16 Oct 25 17:52 UTC │
	│ image          │ functional-363627 image ls --format yaml --alsologtostderr                                                        │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │ 16 Oct 25 17:52 UTC │
	│ ssh            │ functional-363627 ssh pgrep buildkitd                                                                             │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │                     │
	│ image          │ functional-363627 image build -t localhost/my-image:functional-363627 testdata/build --alsologtostderr            │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │ 16 Oct 25 17:52 UTC │
	│ image          │ functional-363627 image ls                                                                                        │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 17:52 UTC │ 16 Oct 25 17:52 UTC │
	│ service        │ functional-363627 service list                                                                                    │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 18:01 UTC │ 16 Oct 25 18:02 UTC │
	│ service        │ functional-363627 service list -o json                                                                            │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 18:02 UTC │ 16 Oct 25 18:02 UTC │
	│ service        │ functional-363627 service --namespace=default --https --url hello-node                                            │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 18:02 UTC │                     │
	│ service        │ functional-363627 service hello-node --url --format={{.IP}}                                                       │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 18:02 UTC │                     │
	│ service        │ functional-363627 service hello-node --url                                                                        │ functional-363627 │ jenkins │ v1.37.0 │ 16 Oct 25 18:02 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 17:52:22
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 17:52:22.434777   51117 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:52:22.434887   51117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:52:22.434895   51117 out.go:374] Setting ErrFile to fd 2...
	I1016 17:52:22.434900   51117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:52:22.435203   51117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:52:22.435630   51117 out.go:368] Setting JSON to false
	I1016 17:52:22.436581   51117 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2090,"bootTime":1760635052,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 17:52:22.436670   51117 start.go:141] virtualization: kvm guest
	I1016 17:52:22.438491   51117 out.go:179] * [functional-363627] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 17:52:22.440448   51117 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 17:52:22.440496   51117 notify.go:220] Checking for updates...
	I1016 17:52:22.443146   51117 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 17:52:22.444213   51117 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 17:52:22.445495   51117 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 17:52:22.446833   51117 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 17:52:22.451341   51117 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 17:52:22.453340   51117 config.go:182] Loaded profile config "functional-363627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:52:22.453986   51117 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 17:52:22.484512   51117 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 17:52:22.484618   51117 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 17:52:22.557373   51117 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-16 17:52:22.543886878 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 17:52:22.557513   51117 docker.go:318] overlay module found
	I1016 17:52:22.560121   51117 out.go:179] * Using the docker driver based on the existing profile
	I1016 17:52:22.561495   51117 start.go:305] selected driver: docker
	I1016 17:52:22.561513   51117 start.go:925] validating driver "docker" against &{Name:functional-363627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-363627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 17:52:22.561645   51117 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 17:52:22.563759   51117 out.go:203] 
	W1016 17:52:22.565516   51117 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1016 17:52:22.566770   51117 out.go:203] 
	
	
	==> CRI-O <==
	Oct 16 17:52:32 functional-363627 crio[3562]: time="2025-10-16T17:52:32.208050251Z" level=info msg="Starting container: 52280e9e8923577747c08a62bb5f2aa9c65cb763924550120a88430a49b65f04" id=4775bbd2-b626-4226-b228-999663e2eaad name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 17:52:32 functional-363627 crio[3562]: time="2025-10-16T17:52:32.210055502Z" level=info msg="Started container" PID=7422 containerID=52280e9e8923577747c08a62bb5f2aa9c65cb763924550120a88430a49b65f04 description=default/mysql-5bb876957f-qqhbc/mysql id=4775bbd2-b626-4226-b228-999663e2eaad name=/runtime.v1.RuntimeService/StartContainer sandboxID=ebfa079a3ad2f2ad677b7458222399c230c2183f7fba0905bc75d3fac9011878
	Oct 16 17:52:33 functional-363627 crio[3562]: time="2025-10-16T17:52:33.985121129Z" level=info msg="Stopping pod sandbox: db917d04fb9d1d1f0213e42611a4c0a9b530443a472cb5d0e666e356d69390cd" id=c9fb485a-9a56-4567-ac18-31ff75101dfc name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 16 17:52:33 functional-363627 crio[3562]: time="2025-10-16T17:52:33.985171115Z" level=info msg="Stopped pod sandbox (already stopped): db917d04fb9d1d1f0213e42611a4c0a9b530443a472cb5d0e666e356d69390cd" id=c9fb485a-9a56-4567-ac18-31ff75101dfc name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 16 17:52:33 functional-363627 crio[3562]: time="2025-10-16T17:52:33.985572744Z" level=info msg="Removing pod sandbox: db917d04fb9d1d1f0213e42611a4c0a9b530443a472cb5d0e666e356d69390cd" id=43b67a00-b032-4e09-bbd1-124f795fba98 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 16 17:52:33 functional-363627 crio[3562]: time="2025-10-16T17:52:33.988280327Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 16 17:52:33 functional-363627 crio[3562]: time="2025-10-16T17:52:33.98833868Z" level=info msg="Removed pod sandbox: db917d04fb9d1d1f0213e42611a4c0a9b530443a472cb5d0e666e356d69390cd" id=43b67a00-b032-4e09-bbd1-124f795fba98 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 16 17:52:33 functional-363627 crio[3562]: time="2025-10-16T17:52:33.988777166Z" level=info msg="Stopping pod sandbox: 659f18c92ead0bbdceb1192edd3790ceff8be1b4865a560ae8076e5472366134" id=e890e78c-376a-4b79-84de-770014619588 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 16 17:52:33 functional-363627 crio[3562]: time="2025-10-16T17:52:33.988826009Z" level=info msg="Stopped pod sandbox (already stopped): 659f18c92ead0bbdceb1192edd3790ceff8be1b4865a560ae8076e5472366134" id=e890e78c-376a-4b79-84de-770014619588 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 16 17:52:33 functional-363627 crio[3562]: time="2025-10-16T17:52:33.989178073Z" level=info msg="Removing pod sandbox: 659f18c92ead0bbdceb1192edd3790ceff8be1b4865a560ae8076e5472366134" id=25328bba-c5bf-40a1-a300-07325293b414 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 16 17:52:33 functional-363627 crio[3562]: time="2025-10-16T17:52:33.991401047Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 16 17:52:33 functional-363627 crio[3562]: time="2025-10-16T17:52:33.991460194Z" level=info msg="Removed pod sandbox: 659f18c92ead0bbdceb1192edd3790ceff8be1b4865a560ae8076e5472366134" id=25328bba-c5bf-40a1-a300-07325293b414 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 16 17:52:33 functional-363627 crio[3562]: time="2025-10-16T17:52:33.991785407Z" level=info msg="Stopping pod sandbox: b1efb9254bff8b03867bf78b0f31cec5ba5bb3ac0bbb7b347559e2b3797083cd" id=07584da0-0c25-445f-bf02-ec8ce6fa84a3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 16 17:52:33 functional-363627 crio[3562]: time="2025-10-16T17:52:33.991820908Z" level=info msg="Stopped pod sandbox (already stopped): b1efb9254bff8b03867bf78b0f31cec5ba5bb3ac0bbb7b347559e2b3797083cd" id=07584da0-0c25-445f-bf02-ec8ce6fa84a3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 16 17:52:33 functional-363627 crio[3562]: time="2025-10-16T17:52:33.9920746Z" level=info msg="Removing pod sandbox: b1efb9254bff8b03867bf78b0f31cec5ba5bb3ac0bbb7b347559e2b3797083cd" id=0d224ecc-b812-4f53-8840-346a2ae10e94 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 16 17:52:33 functional-363627 crio[3562]: time="2025-10-16T17:52:33.994369127Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 16 17:52:33 functional-363627 crio[3562]: time="2025-10-16T17:52:33.994417978Z" level=info msg="Removed pod sandbox: b1efb9254bff8b03867bf78b0f31cec5ba5bb3ac0bbb7b347559e2b3797083cd" id=0d224ecc-b812-4f53-8840-346a2ae10e94 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 16 17:52:41 functional-363627 crio[3562]: time="2025-10-16T17:52:41.999291769Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=fd5ec531-6176-4228-bad6-acc1de13a935 name=/runtime.v1.ImageService/PullImage
	Oct 16 17:52:43 functional-363627 crio[3562]: time="2025-10-16T17:52:43.999784579Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c182a7c7-1438-4572-8f0b-917c92968be6 name=/runtime.v1.ImageService/PullImage
	Oct 16 17:53:22 functional-363627 crio[3562]: time="2025-10-16T17:53:22.999754985Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6468b873-81fb-4d84-98ac-4c84fab55bf7 name=/runtime.v1.ImageService/PullImage
	Oct 16 17:53:26 functional-363627 crio[3562]: time="2025-10-16T17:53:26.999013881Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a37acfb3-0bb0-446e-a930-3797a9b59785 name=/runtime.v1.ImageService/PullImage
	Oct 16 17:54:48 functional-363627 crio[3562]: time="2025-10-16T17:54:48.999118933Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f1fac298-9eca-47c8-b964-50428548157e name=/runtime.v1.ImageService/PullImage
	Oct 16 17:54:57 functional-363627 crio[3562]: time="2025-10-16T17:54:57.998908743Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=30e25a46-11f2-4ae1-8c06-d354bf1e71fa name=/runtime.v1.ImageService/PullImage
	Oct 16 17:57:37 functional-363627 crio[3562]: time="2025-10-16T17:57:37.999460337Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1bdc79d3-79a6-4c5f-9189-fad951ba27b7 name=/runtime.v1.ImageService/PullImage
	Oct 16 17:57:40 functional-363627 crio[3562]: time="2025-10-16T17:57:40.999525353Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7dc7dfe9-496c-441a-892e-bcd54ceac8c4 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	52280e9e89235       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   ebfa079a3ad2f       mysql-5bb876957f-qqhbc                       default
	ae94e75c3f3e9       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   419bd4307b419       dashboard-metrics-scraper-77bf4d6c4c-4l4z9   kubernetes-dashboard
	8b9161aca92be       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   483e6f7b1a7df       kubernetes-dashboard-855c9754f9-kz7v2        kubernetes-dashboard
	a57723767c902       docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115                  9 minutes ago       Running             myfrontend                  0                   ed4c09f0979a1       sp-pod                                       default
	1c309fda9d7bb       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   b3fcfb4e080c3       busybox-mount                                default
	527770e800fd6       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                  10 minutes ago      Running             nginx                       0                   aef17bdb76620       nginx-svc                                    default
	7ff46337fe0b5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   978757105f5c6       kube-apiserver-functional-363627             kube-system
	135240a22fff6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     1                   fee1265bbcdbd       kube-controller-manager-functional-363627    kube-system
	0525e06eb1b93       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   21b80453e30e1       kube-scheduler-functional-363627             kube-system
	fbef52937ec9c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   61eaae34999cf       etcd-functional-363627                       kube-system
	ad9fa84066d41       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 10 minutes ago      Running             kube-proxy                  1                   27383992d3954       kube-proxy-s5z52                             kube-system
	233ed179a8b87       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   70be05a2fd1cf       kindnet-x5782                                kube-system
	853bf32209178       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   8ce63373be2ff       coredns-66bc5c9577-sqhph                     kube-system
	54d6b98a47007       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   f002e0dfa27f0       storage-provisioner                          kube-system
	23eb7bd96d679       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   8ce63373be2ff       coredns-66bc5c9577-sqhph                     kube-system
	26449bb599c7b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   f002e0dfa27f0       storage-provisioner                          kube-system
	21e42b55e878c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   27383992d3954       kube-proxy-s5z52                             kube-system
	0eafcfb63467d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   70be05a2fd1cf       kindnet-x5782                                kube-system
	45f843510bb83       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 12 minutes ago      Exited              etcd                        0                   61eaae34999cf       etcd-functional-363627                       kube-system
	3cbb40f010346       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 12 minutes ago      Exited              kube-controller-manager     0                   fee1265bbcdbd       kube-controller-manager-functional-363627    kube-system
	b36342bbccc61       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 12 minutes ago      Exited              kube-scheduler              0                   21b80453e30e1       kube-scheduler-functional-363627             kube-system
	
	
	==> coredns [23eb7bd96d67927bb7040e382be5c2542ccef968390df147c25f62d437a22ab1] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45796 - 13714 "HINFO IN 3116238431109625843.6366813338428274451. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051420354s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [853bf322091782a9e753f59b074803668ce4dc84a49b2189ebfbe6b65df29f70] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33771 - 61411 "HINFO IN 6585480290886245862.7426308808924985138. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.438503643s
	
	
	==> describe nodes <==
	Name:               functional-363627
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-363627
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=functional-363627
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T17_50_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 17:50:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-363627
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:01:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:01:49 +0000   Thu, 16 Oct 2025 17:50:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:01:49 +0000   Thu, 16 Oct 2025 17:50:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:01:49 +0000   Thu, 16 Oct 2025 17:50:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 18:01:49 +0000   Thu, 16 Oct 2025 17:50:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-363627
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                6132fe16-5c63-4cf9-87e7-f00f3e5e704b
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-znkcm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-fkz5q           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-qqhbc                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m41s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  kube-system                 coredns-66bc5c9577-sqhph                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-363627                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-x5782                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-363627              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-363627     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-s5z52                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-363627              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-4l4z9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-kz7v2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-363627 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-363627 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-363627 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-363627 event: Registered Node functional-363627 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-363627 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-363627 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-363627 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-363627 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-363627 event: Registered Node functional-363627 in Controller
	
	
	==> dmesg <==
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.008703] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.022984] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023933] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +2.047848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +4.031714] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +8.063410] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[ +16.382910] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[Oct16 17:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
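
The repeating martian-source lines record packets with a loopback source
address (127.0.0.1) arriving on eth0; the kernel prints them only while the
log_martians sysctl is enabled for the interface. A quick check (hypothetical
command, interface name taken from the log above; not part of the recorded
run):

    sysctl net.ipv4.conf.eth0.log_martians net.ipv4.conf.all.log_martians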
	
	
	==> etcd [45f843510bb83af321a07f4a84989911119e03af5b48549ee4a3302b55f627ab] <==
	{"level":"warn","ts":"2025-10-16T17:50:01.093663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:50:01.100750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:50:01.106907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:50:01.126272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:50:01.133313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:50:01.139769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:50:01.193824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59668","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-16T17:51:14.416210Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-16T17:51:14.416323Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-363627","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-16T17:51:14.416435Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-16T17:51:21.418124Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-16T17:51:21.418218Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-16T17:51:21.418272Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-16T17:51:21.418322Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-16T17:51:21.418387Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-16T17:51:21.418385Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-10-16T17:51:21.418400Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-16T17:51:21.418421Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-16T17:51:21.418441Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-16T17:51:21.418451Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-16T17:51:21.418428Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-16T17:51:21.421598Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-16T17:51:21.421662Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-16T17:51:21.421695Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-16T17:51:21.421703Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-363627","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [fbef52937ec9c6f0af86a5583c50bafd73feda20361963307d35ab026043750c] <==
	{"level":"warn","ts":"2025-10-16T17:51:35.617622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.633338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.639885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.646265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.652373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.658514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.666486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.673041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.679665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.685938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.692600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.698860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.711878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.718482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.725027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.731291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.737764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.744120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.764681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.775905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.782234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T17:51:35.822439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55564","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-16T18:01:35.344746Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1158}
	{"level":"info","ts":"2025-10-16T18:01:35.364055Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1158,"took":"18.901648ms","hash":2491890763,"current-db-size-bytes":3461120,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1556480,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-10-16T18:01:35.364098Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2491890763,"revision":1158,"compact-revision":-1}
	
	
	==> kernel <==
	 18:02:04 up 44 min,  0 user,  load average: 0.27, 0.28, 0.33
	Linux functional-363627 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0eafcfb63467d50d7115348f87817587a6e21ad6bfa6e93ddbd8a521a310a14f] <==
	I1016 17:50:09.943660       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 17:50:09.944000       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1016 17:50:09.944147       1 main.go:148] setting mtu 1500 for CNI 
	I1016 17:50:09.944167       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 17:50:09.944195       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T17:50:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 17:50:10.143703       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 17:50:10.143746       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 17:50:10.143758       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 17:50:10.144045       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1016 17:50:40.144176       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1016 17:50:40.144178       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1016 17:50:40.144178       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1016 17:50:40.144178       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1016 17:50:41.444775       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 17:50:41.444796       1 metrics.go:72] Registering metrics
	I1016 17:50:41.444840       1 controller.go:711] "Syncing nftables rules"
	I1016 17:50:50.145141       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:50:50.145182       1 main.go:301] handling current node
	I1016 17:51:00.151595       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:51:00.151627       1 main.go:301] handling current node
	I1016 17:51:10.145563       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 17:51:10.145594       1 main.go:301] handling current node
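
The one error in this section ("nri plugin exited: failed to connect to NRI
service") only means CRI-O exposes no NRI socket at /var/run/nri/nri.sock;
kindnet's network-policy controller carries on without it, as the later sync
lines show. If NRI were wanted, it would be switched on in CRI-O's config
(hypothetical check, assuming the stock crio.conf layout; not part of the
recorded run):

    grep -A2 '^\[crio.nri\]' /etc/crio/crio.conf   # expect: enable_nri = true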
	
	
	==> kindnet [233ed179a8b875efeb15180a0ce50b90f7af29bd8e839be3a6ca93377a52d04d] <==
	I1016 17:59:55.710359       1 main.go:301] handling current node
	I1016 18:00:05.710690       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:00:05.710732       1 main.go:301] handling current node
	I1016 18:00:15.710611       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:00:15.710649       1 main.go:301] handling current node
	I1016 18:00:25.710562       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:00:25.710625       1 main.go:301] handling current node
	I1016 18:00:35.710881       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:00:35.710911       1 main.go:301] handling current node
	I1016 18:00:45.710346       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:00:45.710377       1 main.go:301] handling current node
	I1016 18:00:55.710510       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:00:55.710540       1 main.go:301] handling current node
	I1016 18:01:05.710847       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:01:05.710877       1 main.go:301] handling current node
	I1016 18:01:15.709982       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:01:15.710022       1 main.go:301] handling current node
	I1016 18:01:25.710526       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:01:25.710568       1 main.go:301] handling current node
	I1016 18:01:35.710377       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:01:35.710406       1 main.go:301] handling current node
	I1016 18:01:45.710465       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:01:45.710500       1 main.go:301] handling current node
	I1016 18:01:55.710545       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1016 18:01:55.710576       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7ff46337fe0b56d60e83fb77b552d40bec95aee333801f839294bd20e331a28a] <==
	I1016 17:51:36.301281       1 cache.go:39] Caches are synced for autoregister controller
	I1016 17:51:37.018783       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 17:51:37.176771       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1016 17:51:37.384401       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1016 17:51:37.385653       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 17:51:37.390575       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 17:51:37.859948       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1016 17:51:37.954947       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 17:51:38.011044       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 17:51:38.018897       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 17:51:46.373850       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 17:51:53.330788       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.109.65.28"}
	I1016 17:51:57.911770       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.18.127"}
	I1016 17:51:59.960537       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.106.219.122"}
	I1016 17:52:02.821058       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.156.165"}
	E1016 17:52:14.982280       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37592: use of closed network connection
	I1016 17:52:21.264964       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 17:52:21.391966       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.153.60"}
	I1016 17:52:21.406137       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.201.203"}
	E1016 17:52:22.386460       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39214: use of closed network connection
	I1016 17:52:23.052927       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.224.156"}
	E1016 17:52:39.188504       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:55586: use of closed network connection
	E1016 17:52:40.046704       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:55606: use of closed network connection
	E1016 17:52:42.184167       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:55622: use of closed network connection
	I1016 18:01:36.220201       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
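
The final allocator line reflects the cluster's ServiceCIDR object, an API
that is GA as of Kubernetes v1.33, so on this v1.34.1 cluster it can be
inspected directly (hypothetical check; not part of the recorded run):

    kubectl get servicecidrs.networking.k8s.io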
	
	
	==> kube-controller-manager [135240a22fff64215ac939449df5170c891a881cbeee11352217450947c44e20] <==
	I1016 17:51:39.609453       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 17:51:39.609590       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 17:51:39.609588       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-363627"
	I1016 17:51:39.609675       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1016 17:51:39.610066       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1016 17:51:39.611938       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1016 17:51:39.611947       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1016 17:51:39.613531       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1016 17:51:39.615632       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1016 17:51:39.615706       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1016 17:51:39.615784       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1016 17:51:39.615825       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1016 17:51:39.615832       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1016 17:51:39.615836       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1016 17:51:39.618875       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1016 17:51:39.621170       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 17:51:39.621299       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 17:51:39.624393       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1016 17:51:39.633020       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1016 17:52:21.314420       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1016 17:52:21.321235       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1016 17:52:21.326427       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1016 17:52:21.327673       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1016 17:52:21.331510       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1016 17:52:21.334742       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [3cbb40f010346780e4545e879b7aa9b5c028b04bfb14d41c3eabf8dc1084179a] <==
	I1016 17:50:08.566790       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1016 17:50:08.566808       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1016 17:50:08.566861       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1016 17:50:08.566973       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-363627"
	I1016 17:50:08.567089       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1016 17:50:08.567317       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1016 17:50:08.567415       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1016 17:50:08.567460       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1016 17:50:08.567475       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1016 17:50:08.567565       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 17:50:08.567671       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1016 17:50:08.567992       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1016 17:50:08.568041       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1016 17:50:08.568133       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 17:50:08.568285       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 17:50:08.568397       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1016 17:50:08.568452       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1016 17:50:08.569302       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 17:50:08.569332       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1016 17:50:08.569357       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1016 17:50:08.570415       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1016 17:50:08.572834       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 17:50:08.572843       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1016 17:50:08.591432       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 17:50:53.575141       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [21e42b55e878c2899195b7c037d72cfab89d6ad9e718e64aa5841dba6a408e95] <==
	I1016 17:50:10.273492       1 server_linux.go:53] "Using iptables proxy"
	I1016 17:50:10.333424       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 17:50:10.433623       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 17:50:10.433657       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1016 17:50:10.433751       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 17:50:10.453308       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 17:50:10.453352       1 server_linux.go:132] "Using iptables Proxier"
	I1016 17:50:10.459090       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 17:50:10.459460       1 server.go:527] "Version info" version="v1.34.1"
	I1016 17:50:10.459481       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 17:50:10.460783       1 config.go:200] "Starting service config controller"
	I1016 17:50:10.460805       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 17:50:10.460827       1 config.go:106] "Starting endpoint slice config controller"
	I1016 17:50:10.460842       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 17:50:10.460824       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 17:50:10.460868       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 17:50:10.460892       1 config.go:309] "Starting node config controller"
	I1016 17:50:10.460903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 17:50:10.460910       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 17:50:10.561835       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1016 17:50:10.561840       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 17:50:10.561899       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [ad9fa84066d41baf1b301e1071e33fe29efc7d0b720e25c70eac8ae28b91f93c] <==
	I1016 17:51:15.332929       1 server_linux.go:53] "Using iptables proxy"
	I1016 17:51:15.395588       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 17:51:15.496606       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 17:51:15.496651       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1016 17:51:15.496758       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 17:51:15.516092       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 17:51:15.516163       1 server_linux.go:132] "Using iptables Proxier"
	I1016 17:51:15.521874       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 17:51:15.522211       1 server.go:527] "Version info" version="v1.34.1"
	I1016 17:51:15.522245       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 17:51:15.523582       1 config.go:309] "Starting node config controller"
	I1016 17:51:15.523607       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 17:51:15.524098       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 17:51:15.523627       1 config.go:200] "Starting service config controller"
	I1016 17:51:15.523639       1 config.go:106] "Starting endpoint slice config controller"
	I1016 17:51:15.524447       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 17:51:15.524484       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 17:51:15.523648       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 17:51:15.524545       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 17:51:15.624523       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 17:51:15.624541       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 17:51:15.624595       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
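
Both kube-proxy instances log the same configuration warning about
nodePortAddresses being unset. The remedy the warning itself suggests maps to
one field of the kubeadm-managed ConfigMap (hypothetical edit; the object
name is assumed from the standard kubeadm layout, not part of the recorded
run):

    kubectl -n kube-system edit configmap kube-proxy
    # in the embedded KubeProxyConfiguration, set:
    #   nodePortAddresses: ["primary"]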
	
	
	==> kube-scheduler [0525e06eb1b934f41c855bda5a330394be5da7c94a0151ae6f934897f6f799bd] <==
	I1016 17:51:34.836853       1 serving.go:386] Generated self-signed cert in-memory
	W1016 17:51:36.199269       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1016 17:51:36.199395       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1016 17:51:36.199414       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1016 17:51:36.199424       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1016 17:51:36.241473       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 17:51:36.241500       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 17:51:36.243276       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 17:51:36.243313       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 17:51:36.243664       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 17:51:36.243700       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 17:51:36.343746       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
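
The requestheader warning above prints a rolebinding template. On a
kubeadm-style cluster the scheduler authenticates as the user
system:kube-scheduler rather than as a service account, so a faithful
rendering of that template binds the role to the user instead (hypothetical
command with a placeholder binding name; not part of the recorded run):

    kubectl create rolebinding scheduler-auth-reader \
      -n kube-system \
      --role=extension-apiserver-authentication-reader \
      --user=system:kube-scheduler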
	
	
	==> kube-scheduler [b36342bbccc6153dfd72da579ccf76e68eb6e3e118c45a711df91a86f7df1156] <==
	E1016 17:50:01.586800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 17:50:01.586808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 17:50:01.586861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 17:50:01.586863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 17:50:01.586862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 17:50:01.586928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 17:50:02.452870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 17:50:02.467326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 17:50:02.556826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 17:50:02.577235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 17:50:02.580344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 17:50:02.608061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 17:50:02.624491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 17:50:02.699000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 17:50:02.749102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 17:50:02.800634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 17:50:02.817644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 17:50:02.957062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1016 17:50:05.182802       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 17:51:32.154616       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1016 17:51:32.154633       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 17:51:32.154821       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1016 17:51:32.154850       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1016 17:51:32.154879       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1016 17:51:32.154905       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 16 17:59:22 functional-363627 kubelet[4251]: E1016 17:59:22.999454    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fkz5q" podUID="7dd10c13-4fc5-4243-8f20-22752bcc2dc1"
	Oct 16 17:59:32 functional-363627 kubelet[4251]: E1016 17:59:32.998807    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-znkcm" podUID="59e9fb90-d743-4420-a6cd-69e6975ba322"
	Oct 16 17:59:38 functional-363627 kubelet[4251]: E1016 17:59:38.000652    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fkz5q" podUID="7dd10c13-4fc5-4243-8f20-22752bcc2dc1"
	Oct 16 17:59:44 functional-363627 kubelet[4251]: E1016 17:59:44.998865    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-znkcm" podUID="59e9fb90-d743-4420-a6cd-69e6975ba322"
	Oct 16 17:59:49 functional-363627 kubelet[4251]: E1016 17:59:49.998377    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fkz5q" podUID="7dd10c13-4fc5-4243-8f20-22752bcc2dc1"
	Oct 16 17:59:59 functional-363627 kubelet[4251]: E1016 17:59:59.999156    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-znkcm" podUID="59e9fb90-d743-4420-a6cd-69e6975ba322"
	Oct 16 18:00:04 functional-363627 kubelet[4251]: E1016 18:00:04.999088    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fkz5q" podUID="7dd10c13-4fc5-4243-8f20-22752bcc2dc1"
	Oct 16 18:00:12 functional-363627 kubelet[4251]: E1016 18:00:12.999153    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-znkcm" podUID="59e9fb90-d743-4420-a6cd-69e6975ba322"
	Oct 16 18:00:18 functional-363627 kubelet[4251]: E1016 18:00:18.998888    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fkz5q" podUID="7dd10c13-4fc5-4243-8f20-22752bcc2dc1"
	Oct 16 18:00:26 functional-363627 kubelet[4251]: E1016 18:00:26.999182    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-znkcm" podUID="59e9fb90-d743-4420-a6cd-69e6975ba322"
	Oct 16 18:00:30 functional-363627 kubelet[4251]: E1016 18:00:30.001023    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fkz5q" podUID="7dd10c13-4fc5-4243-8f20-22752bcc2dc1"
	Oct 16 18:00:40 functional-363627 kubelet[4251]: E1016 18:00:40.998531    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fkz5q" podUID="7dd10c13-4fc5-4243-8f20-22752bcc2dc1"
	Oct 16 18:00:40 functional-363627 kubelet[4251]: E1016 18:00:40.998597    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-znkcm" podUID="59e9fb90-d743-4420-a6cd-69e6975ba322"
	Oct 16 18:00:53 functional-363627 kubelet[4251]: E1016 18:00:53.999161    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-znkcm" podUID="59e9fb90-d743-4420-a6cd-69e6975ba322"
	Oct 16 18:00:54 functional-363627 kubelet[4251]: E1016 18:00:54.998655    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fkz5q" podUID="7dd10c13-4fc5-4243-8f20-22752bcc2dc1"
	Oct 16 18:01:04 functional-363627 kubelet[4251]: E1016 18:01:04.999323    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-znkcm" podUID="59e9fb90-d743-4420-a6cd-69e6975ba322"
	Oct 16 18:01:06 functional-363627 kubelet[4251]: E1016 18:01:06.998316    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fkz5q" podUID="7dd10c13-4fc5-4243-8f20-22752bcc2dc1"
	Oct 16 18:01:18 functional-363627 kubelet[4251]: E1016 18:01:18.999053    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-znkcm" podUID="59e9fb90-d743-4420-a6cd-69e6975ba322"
	Oct 16 18:01:20 functional-363627 kubelet[4251]: E1016 18:01:20.999012    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fkz5q" podUID="7dd10c13-4fc5-4243-8f20-22752bcc2dc1"
	Oct 16 18:01:31 functional-363627 kubelet[4251]: E1016 18:01:31.998785    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fkz5q" podUID="7dd10c13-4fc5-4243-8f20-22752bcc2dc1"
	Oct 16 18:01:33 functional-363627 kubelet[4251]: E1016 18:01:33.999403    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-znkcm" podUID="59e9fb90-d743-4420-a6cd-69e6975ba322"
	Oct 16 18:01:46 functional-363627 kubelet[4251]: E1016 18:01:46.999165    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fkz5q" podUID="7dd10c13-4fc5-4243-8f20-22752bcc2dc1"
	Oct 16 18:01:47 functional-363627 kubelet[4251]: E1016 18:01:47.998991    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-znkcm" podUID="59e9fb90-d743-4420-a6cd-69e6975ba322"
	Oct 16 18:02:00 functional-363627 kubelet[4251]: E1016 18:02:00.998980    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fkz5q" podUID="7dd10c13-4fc5-4243-8f20-22752bcc2dc1"
	Oct 16 18:02:02 functional-363627 kubelet[4251]: E1016 18:02:02.998585    4251 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-znkcm" podUID="59e9fb90-d743-4420-a6cd-69e6975ba322"
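
Every kubelet error in this window is the same failure: CRI-O's short-name
mode is "enforcing", and the unqualified name kicbase/echo-server:latest
resolves to more than one candidate registry, so the pull is rejected as
ambiguous and backs off. Pulling by a fully qualified reference sidesteps the
ambiguity (hypothetical repro on the node; not part of the recorded run):

    sudo crictl pull docker.io/kicbase/echo-server:latest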
	
	
	==> kubernetes-dashboard [8b9161aca92be0596dfa35e19887d4fcac488ac51a167ecc2a0eea1ec2251017] <==
	2025/10/16 17:52:24 Starting overwatch
	2025/10/16 17:52:24 Using namespace: kubernetes-dashboard
	2025/10/16 17:52:24 Using in-cluster config to connect to apiserver
	2025/10/16 17:52:24 Using secret token for csrf signing
	2025/10/16 17:52:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/16 17:52:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/16 17:52:24 Successful initial request to the apiserver, version: v1.34.1
	2025/10/16 17:52:24 Generating JWE encryption key
	2025/10/16 17:52:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/16 17:52:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/16 17:52:24 Initializing JWE encryption key from synchronized object
	2025/10/16 17:52:24 Creating in-cluster Sidecar client
	2025/10/16 17:52:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 17:52:24 Serving insecurely on HTTP port: 9090
	2025/10/16 17:52:54 Successful request to sidecar
	
	
	==> storage-provisioner [26449bb599c7b0e58a1e3726720191a3e238427a35958312a9a0be5b3f213f81] <==
	W1016 17:50:50.886257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:50.889610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 17:50:50.985062       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-363627_d4331464-7453-442e-92b9-56fc2d125f48!
	W1016 17:50:52.894672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:52.899958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:54.903652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:54.907907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:56.911692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:56.916052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:58.918924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:50:58.923793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:51:00.927192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:51:00.930640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:51:02.934281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:51:02.938507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:51:04.941573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:51:04.945746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:51:06.949293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:51:06.953171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:51:08.956504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:51:08.960691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:51:10.964232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:51:10.969785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:51:12.972514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 17:51:12.976995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [54d6b98a4700734daade7762ce185cb0dfdbc1941cb6d4038d3bea316af0a3f6] <==
	W1016 18:01:40.316659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:01:42.320643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:01:42.324684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:01:44.327605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:01:44.331447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:01:46.334169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:01:46.338732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:01:48.342597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:01:48.346791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:01:50.349797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:01:50.354669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:01:52.357732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:01:52.361390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:01:54.364472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:01:54.368454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:01:56.371864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:01:56.376696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:01:58.380217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:01:58.384805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:02:00.387694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:02:00.392214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:02:02.395115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:02:02.399435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:02:04.402937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:02:04.407798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-363627 -n functional-363627
helpers_test.go:269: (dbg) Run:  kubectl --context functional-363627 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-znkcm hello-node-connect-7d85dfc575-fkz5q
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-363627 describe pod busybox-mount hello-node-75c85bcc94-znkcm hello-node-connect-7d85dfc575-fkz5q
helpers_test.go:290: (dbg) kubectl --context functional-363627 describe pod busybox-mount hello-node-75c85bcc94-znkcm hello-node-connect-7d85dfc575-fkz5q:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-363627/192.168.49.2
	Start Time:       Thu, 16 Oct 2025 17:52:12 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://1c309fda9d7bbcd1787e9dbe04922b70e139b69a6d0daee8a62a0b62354d8518
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 16 Oct 2025 17:52:13 +0000
	      Finished:     Thu, 16 Oct 2025 17:52:13 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c58nd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-c58nd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m53s  default-scheduler  Successfully assigned default/busybox-mount to functional-363627
	  Normal  Pulling    9m53s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m52s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.3s (1.3s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m52s  kubelet            Created container: mount-munger
	  Normal  Started    9m52s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-znkcm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-363627/192.168.49.2
	Start Time:       Thu, 16 Oct 2025 17:51:57 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xrsq6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xrsq6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-znkcm to functional-363627
	  Normal   Pulling    7m17s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m17s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m17s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    3s (x42 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     3s (x42 over 10m)    kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-fkz5q
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-363627/192.168.49.2
	Start Time:       Thu, 16 Oct 2025 17:52:02 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-djw6c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-djw6c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-fkz5q to functional-363627
	  Normal   Pulling    7m8s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m8s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m8s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m54s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m40s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.00s)
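
Editor's note: the post-mortem above points at one root cause rather than a service bug: crio's short-name mode is enforcing, so the unqualified reference "kicbase/echo-server" is rejected as ambiguous instead of being resolved against docker.io, and the echo-server pods never start. A minimal sketch of the two usual remedies, assuming the standard containers-registries.conf layout inside the kicbase node (the drop-in path and the 1.0 tag are illustrative, not taken from this run):

    # 1) Use a fully qualified reference so short-name resolution never runs:
    kubectl --context functional-363627 create deployment hello-node-connect \
      --image=docker.io/kicbase/echo-server:1.0

    # 2) Or add a short-name alias on the node (registries.conf.d drop-in):
    sudo tee /etc/containers/registries.conf.d/99-echo-server.conf <<'EOF'
    [aliases]
    "kicbase/echo-server" = "docker.io/kicbase/echo-server"
    EOF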

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (2.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-363627 image ls --format short --alsologtostderr: (2.248479701s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-363627 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-363627 image ls --format short --alsologtostderr:
I1016 17:52:28.023618   51922 out.go:360] Setting OutFile to fd 1 ...
I1016 17:52:28.023971   51922 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:52:28.023995   51922 out.go:374] Setting ErrFile to fd 2...
I1016 17:52:28.024001   51922 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:52:28.024266   51922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
I1016 17:52:28.025126   51922 config.go:182] Loaded profile config "functional-363627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:52:28.025260   51922 config.go:182] Loaded profile config "functional-363627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:52:28.025814   51922 cli_runner.go:164] Run: docker container inspect functional-363627 --format={{.State.Status}}
I1016 17:52:28.048627   51922 ssh_runner.go:195] Run: systemctl --version
I1016 17:52:28.048690   51922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-363627
I1016 17:52:28.070033   51922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/functional-363627/id_rsa Username:docker}
I1016 17:52:28.179697   51922 ssh_runner.go:195] Run: sudo crictl images --output json
I1016 17:52:30.214240   51922 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.034492823s)
W1016 17:52:30.214315   51922 cache_images.go:735] Failed to list images for profile functional-363627 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1016 17:52:30.211446    7232 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="image:{}"
time="2025-10-16T17:52:30Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL"
functional_test.go:290: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.25s)
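
Editor's note: this one is unrelated to the pull problem above; `sudo crictl images --output json` died after ~2.03s with DeadlineExceeded, which lines up with crictl's default 2-second RPC timeout, so `image ls` printed nothing and the registry.k8s.io/pause check failed. A hedged diagnostic, assuming stock crictl on the node:

    # Retry with a longer deadline to separate a momentarily slow crio
    # from a hung image service (--timeout is a crictl global flag):
    sudo crictl --timeout 30s images --output json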

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-363627 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-363627 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-znkcm" [59e9fb90-d743-4420-a6cd-69e6975ba322] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-363627 -n functional-363627
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-16 18:01:58.237841961 +0000 UTC m=+1107.392536481
functional_test.go:1460: (dbg) Run:  kubectl --context functional-363627 describe po hello-node-75c85bcc94-znkcm -n default
functional_test.go:1460: (dbg) kubectl --context functional-363627 describe po hello-node-75c85bcc94-znkcm -n default:
Name:             hello-node-75c85bcc94-znkcm
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-363627/192.168.49.2
Start Time:       Thu, 16 Oct 2025 17:51:57 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xrsq6 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-xrsq6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-znkcm to functional-363627
Normal   Pulling    7m10s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m10s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m10s (x5 over 10m)     kubelet            Error: ErrImagePull
Warning  Failed     5m (x20 over 9m59s)     kubelet            Error: ImagePullBackOff
Normal   BackOff    4m48s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-363627 logs hello-node-75c85bcc94-znkcm -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-363627 logs hello-node-75c85bcc94-znkcm -n default: exit status 1 (69.64739ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-znkcm" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-363627 logs hello-node-75c85bcc94-znkcm -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.64s)
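
Editor's note: same root cause as TestFunctional/parallel/ServiceCmdConnect: the deployment was created with the short name `kicbase/echo-server`, which enforcing short-name mode refuses to resolve. A quick confirmation from inside the node, sketched on the assumption that `crictl pull` exercises the same resolution path the kubelet uses:

    # Should reproduce the "ambiguous list" error:
    sudo crictl pull kicbase/echo-server
    # Should succeed, since a fully qualified name bypasses short-name resolution:
    sudo crictl pull docker.io/kicbase/echo-server:latest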

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 image load --daemon kicbase/echo-server:functional-363627 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-363627" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 image load --daemon kicbase/echo-server:functional-363627 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-363627" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-363627
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 image load --daemon kicbase/echo-server:functional-363627 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-363627" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 image save kicbase/echo-server:functional-363627 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1016 17:52:02.081135   46989 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:52:02.081431   46989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:52:02.081442   46989 out.go:374] Setting ErrFile to fd 2...
	I1016 17:52:02.081446   46989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:52:02.081653   46989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:52:02.082279   46989 config.go:182] Loaded profile config "functional-363627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:52:02.082364   46989 config.go:182] Loaded profile config "functional-363627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:52:02.082740   46989 cli_runner.go:164] Run: docker container inspect functional-363627 --format={{.State.Status}}
	I1016 17:52:02.102157   46989 ssh_runner.go:195] Run: systemctl --version
	I1016 17:52:02.102223   46989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-363627
	I1016 17:52:02.123119   46989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/functional-363627/id_rsa Username:docker}
	I1016 17:52:02.220827   46989 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1016 17:52:02.220892   46989 cache_images.go:254] Failed to load cached images for "functional-363627": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1016 17:52:02.220913   46989 cache_images.go:266] failed pushing to: functional-363627

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
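
Editor's note: a cascade failure: the ImageSaveToFile run above never wrote echo-server-save.tar (the source image was not in crio's store), so this load fails with "no such file or directory". When reproducing standalone, guarding on the artifact makes the dependency explicit (sketch, same paths as in the test):

    test -f /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar && \
      out/minikube-linux-amd64 -p functional-363627 image load \
        /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar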

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-363627
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 image save --daemon kicbase/echo-server:functional-363627 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-363627
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-363627: exit status 1 (17.927629ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-363627

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-363627

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)
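
Editor's note: another link in the same chain: `image save --daemon` presumably had nothing to export because kicbase/echo-server:functional-363627 was never loaded into crio, so no localhost/ tag ever reached the Docker daemon. Listing what the runtime actually holds before saving would make that visible (sketch, assuming the table format flag available in recent minikube):

    out/minikube-linux-amd64 -p functional-363627 image ls --format table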

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-363627 service --namespace=default --https --url hello-node: exit status 115 (531.672609ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32165
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-363627 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-363627 service hello-node --url --format={{.IP}}: exit status 115 (526.857577ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-363627 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-363627 service hello-node --url: exit status 115 (555.260006ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32165
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-363627 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32165
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.56s)
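
Editor's note: HTTPS, Format and URL all fail the same way: the NodePort exists (a URL is printed on stdout), but minikube's reachability check finds no running pod behind the service because the echo-server image never pulled. Checking the selector and endpoints directly shows the same condition (sketch; the app=hello-node label comes from the DeployApp test above):

    kubectl --context functional-363627 get pods -l app=hello-node
    kubectl --context functional-363627 get endpoints hello-node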

                                                
                                    
TestJSONOutput/pause/Command (2.23s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-088519 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-088519 --output=json --user=testUser: exit status 80 (2.2323004s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a32106db-c80a-43c3-b64a-ad8c0b658186","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-088519 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"d6cbaa5b-1f95-4dd0-984e-a437667b7f81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-16T18:12:06Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"d8d7b8ef-7cc1-4116-9a24-31473c3a7acb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-088519 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.23s)

                                                
                                    
TestJSONOutput/unpause/Command (1.71s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-088519 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-088519 --output=json --user=testUser: exit status 80 (1.705925333s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3abc42b6-b8f0-4926-b7b9-2cd1b911aab7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-088519 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"7c8e73ec-c391-4091-b51a-561c5241fa74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-16T18:12:08Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"52f50cb0-7821-400d-bafa-791161ead029","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-088519 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.71s)
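
Editor's note: pause and unpause fail identically: `sudo runc list -f json` aborts because /run/runc does not exist. One plausible reading, not confirmed by this log, is that crio on the kicbase node defaults to a different OCI runtime (crun keeps its state under /run/crun), so runc's state directory is never created. A hedged check from the host:

    # Paths and the default_runtime key are assumptions about the node's crio setup:
    minikube -p json-output-088519 ssh \
      "ls -d /run/runc /run/crun 2>/dev/null; sudo crio config 2>/dev/null | grep -m1 default_runtime"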

                                                
                                    
TestPause/serial/Pause (6.19s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-388667 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-388667 --alsologtostderr -v=5: exit status 80 (2.24989424s)

                                                
                                                
-- stdout --
	* Pausing node pause-388667 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:25:09.425479  196043 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:25:09.425737  196043 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:25:09.425748  196043 out.go:374] Setting ErrFile to fd 2...
	I1016 18:25:09.425753  196043 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:25:09.425964  196043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:25:09.426187  196043 out.go:368] Setting JSON to false
	I1016 18:25:09.426227  196043 mustload.go:65] Loading cluster: pause-388667
	I1016 18:25:09.426615  196043 config.go:182] Loaded profile config "pause-388667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:25:09.427011  196043 cli_runner.go:164] Run: docker container inspect pause-388667 --format={{.State.Status}}
	I1016 18:25:09.449094  196043 host.go:66] Checking if "pause-388667" exists ...
	I1016 18:25:09.449353  196043 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:25:09.520852  196043 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-16 18:25:09.50775576 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:25:09.521505  196043 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-388667 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1016 18:25:09.523293  196043 out.go:179] * Pausing node pause-388667 ... 
	I1016 18:25:09.524464  196043 host.go:66] Checking if "pause-388667" exists ...
	I1016 18:25:09.524701  196043 ssh_runner.go:195] Run: systemctl --version
	I1016 18:25:09.524776  196043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-388667
	I1016 18:25:09.544189  196043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/pause-388667/id_rsa Username:docker}
	I1016 18:25:09.643198  196043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:25:09.657019  196043 pause.go:52] kubelet running: true
	I1016 18:25:09.657089  196043 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:25:09.801008  196043 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:25:09.801089  196043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:25:09.872884  196043 cri.go:89] found id: "d238953b63b95d0adb614cede346e952ca5e2536f0f1729e27d979eebaf9564e"
	I1016 18:25:09.872910  196043 cri.go:89] found id: "1a0355b809c89c9e7fc817076165843c4af93b1e901010a8b87b8a9d65759c42"
	I1016 18:25:09.872916  196043 cri.go:89] found id: "7446dabe8d3d0742c1525a6d8174594e5ba5ca08c2637968eb687cb7e6a14e12"
	I1016 18:25:09.872921  196043 cri.go:89] found id: "7f40bb35cc90d6a6b501433a822a537de59cf6b3e0f0b0e02bd6ed60cc9d345a"
	I1016 18:25:09.872925  196043 cri.go:89] found id: "ae63c3e46eefb8fd7b28cc9c7ab67cacb5b6660e6a4cdaeac8fe16256cc78716"
	I1016 18:25:09.872930  196043 cri.go:89] found id: "956dd8019a90f588a5ce2079ef107e07c66ee155b4ffedfe9c3f268c4c18fc2d"
	I1016 18:25:09.872934  196043 cri.go:89] found id: "735b1613d4b31ff3f75bea5e311720a3f9b35a809467bd1b306166b4ab7391ac"
	I1016 18:25:09.872938  196043 cri.go:89] found id: ""
	I1016 18:25:09.872989  196043 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:25:09.886191  196043 retry.go:31] will retry after 210.397941ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:25:09Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:25:10.097660  196043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:25:10.110658  196043 pause.go:52] kubelet running: false
	I1016 18:25:10.110740  196043 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:25:10.218380  196043 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:25:10.218475  196043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:25:10.283980  196043 cri.go:89] found id: "d238953b63b95d0adb614cede346e952ca5e2536f0f1729e27d979eebaf9564e"
	I1016 18:25:10.284002  196043 cri.go:89] found id: "1a0355b809c89c9e7fc817076165843c4af93b1e901010a8b87b8a9d65759c42"
	I1016 18:25:10.284006  196043 cri.go:89] found id: "7446dabe8d3d0742c1525a6d8174594e5ba5ca08c2637968eb687cb7e6a14e12"
	I1016 18:25:10.284010  196043 cri.go:89] found id: "7f40bb35cc90d6a6b501433a822a537de59cf6b3e0f0b0e02bd6ed60cc9d345a"
	I1016 18:25:10.284012  196043 cri.go:89] found id: "ae63c3e46eefb8fd7b28cc9c7ab67cacb5b6660e6a4cdaeac8fe16256cc78716"
	I1016 18:25:10.284015  196043 cri.go:89] found id: "956dd8019a90f588a5ce2079ef107e07c66ee155b4ffedfe9c3f268c4c18fc2d"
	I1016 18:25:10.284018  196043 cri.go:89] found id: "735b1613d4b31ff3f75bea5e311720a3f9b35a809467bd1b306166b4ab7391ac"
	I1016 18:25:10.284021  196043 cri.go:89] found id: ""
	I1016 18:25:10.284055  196043 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:25:10.295881  196043 retry.go:31] will retry after 459.960248ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:25:10Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:25:10.756270  196043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:25:10.769754  196043 pause.go:52] kubelet running: false
	I1016 18:25:10.769828  196043 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:25:10.901380  196043 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:25:10.901453  196043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:25:10.981364  196043 cri.go:89] found id: "d238953b63b95d0adb614cede346e952ca5e2536f0f1729e27d979eebaf9564e"
	I1016 18:25:10.981385  196043 cri.go:89] found id: "1a0355b809c89c9e7fc817076165843c4af93b1e901010a8b87b8a9d65759c42"
	I1016 18:25:10.981390  196043 cri.go:89] found id: "7446dabe8d3d0742c1525a6d8174594e5ba5ca08c2637968eb687cb7e6a14e12"
	I1016 18:25:10.981394  196043 cri.go:89] found id: "7f40bb35cc90d6a6b501433a822a537de59cf6b3e0f0b0e02bd6ed60cc9d345a"
	I1016 18:25:10.981398  196043 cri.go:89] found id: "ae63c3e46eefb8fd7b28cc9c7ab67cacb5b6660e6a4cdaeac8fe16256cc78716"
	I1016 18:25:10.981403  196043 cri.go:89] found id: "956dd8019a90f588a5ce2079ef107e07c66ee155b4ffedfe9c3f268c4c18fc2d"
	I1016 18:25:10.981406  196043 cri.go:89] found id: "735b1613d4b31ff3f75bea5e311720a3f9b35a809467bd1b306166b4ab7391ac"
	I1016 18:25:10.981409  196043 cri.go:89] found id: ""
	I1016 18:25:10.981454  196043 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:25:10.995537  196043 retry.go:31] will retry after 309.441352ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:25:10Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:25:11.305914  196043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:25:11.319415  196043 pause.go:52] kubelet running: false
	I1016 18:25:11.319472  196043 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:25:11.491014  196043 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:25:11.491136  196043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:25:11.591249  196043 cri.go:89] found id: "d238953b63b95d0adb614cede346e952ca5e2536f0f1729e27d979eebaf9564e"
	I1016 18:25:11.591270  196043 cri.go:89] found id: "1a0355b809c89c9e7fc817076165843c4af93b1e901010a8b87b8a9d65759c42"
	I1016 18:25:11.591275  196043 cri.go:89] found id: "7446dabe8d3d0742c1525a6d8174594e5ba5ca08c2637968eb687cb7e6a14e12"
	I1016 18:25:11.591341  196043 cri.go:89] found id: "7f40bb35cc90d6a6b501433a822a537de59cf6b3e0f0b0e02bd6ed60cc9d345a"
	I1016 18:25:11.591346  196043 cri.go:89] found id: "ae63c3e46eefb8fd7b28cc9c7ab67cacb5b6660e6a4cdaeac8fe16256cc78716"
	I1016 18:25:11.591351  196043 cri.go:89] found id: "956dd8019a90f588a5ce2079ef107e07c66ee155b4ffedfe9c3f268c4c18fc2d"
	I1016 18:25:11.591355  196043 cri.go:89] found id: "735b1613d4b31ff3f75bea5e311720a3f9b35a809467bd1b306166b4ab7391ac"
	I1016 18:25:11.591359  196043 cri.go:89] found id: ""
	I1016 18:25:11.591408  196043 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:25:11.614000  196043 out.go:203] 
	W1016 18:25:11.616159  196043 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:25:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:25:11.616214  196043 out.go:285] * 
	W1016 18:25:11.621315  196043 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:25:11.623401  196043 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-388667 --alsologtostderr -v=5" : exit status 80
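
Note: the pause path above shells out to `sudo runc list -f json`, which fails repeatedly with `open /run/runc: no such file or directory`; the retry.go:31 lines show a growing, jittered backoff between attempts before minikube gives up with GUEST_PAUSE. A minimal Go sketch of that retry-with-backoff shape (hypothetical helper names, not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered, growing backoff
// between failures; it returns nil on the first success or the last error.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jittered backoff: comparable in spirit to the
		// "will retry after 210ms / 459ms / 309ms" lines above.
		wait := base*time.Duration(i+1)/2 + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	err := retry(4, 200*time.Millisecond, func() error {
		// Stand-in for the failing call in the log above.
		return errors.New("runc: sudo runc list -f json: Process exited with status 1")
	})
	fmt.Println("Exiting due to GUEST_PAUSE:", err)
}
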
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-388667
helpers_test.go:243: (dbg) docker inspect pause-388667:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "644aa94f4c501b614c3d1b25e524cb3d3921780c537332bf7036419bf11bb71c",
	        "Created": "2025-10-16T18:24:20.930514628Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182334,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:24:21.528544325Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/644aa94f4c501b614c3d1b25e524cb3d3921780c537332bf7036419bf11bb71c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/644aa94f4c501b614c3d1b25e524cb3d3921780c537332bf7036419bf11bb71c/hostname",
	        "HostsPath": "/var/lib/docker/containers/644aa94f4c501b614c3d1b25e524cb3d3921780c537332bf7036419bf11bb71c/hosts",
	        "LogPath": "/var/lib/docker/containers/644aa94f4c501b614c3d1b25e524cb3d3921780c537332bf7036419bf11bb71c/644aa94f4c501b614c3d1b25e524cb3d3921780c537332bf7036419bf11bb71c-json.log",
	        "Name": "/pause-388667",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-388667:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-388667",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "644aa94f4c501b614c3d1b25e524cb3d3921780c537332bf7036419bf11bb71c",
	                "LowerDir": "/var/lib/docker/overlay2/0cf10175c665507f98aeb5012d17d02d31a94c80eaafa6c7c3a4d2c326007a32-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0cf10175c665507f98aeb5012d17d02d31a94c80eaafa6c7c3a4d2c326007a32/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0cf10175c665507f98aeb5012d17d02d31a94c80eaafa6c7c3a4d2c326007a32/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0cf10175c665507f98aeb5012d17d02d31a94c80eaafa6c7c3a4d2c326007a32/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-388667",
	                "Source": "/var/lib/docker/volumes/pause-388667/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-388667",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-388667",
	                "name.minikube.sigs.k8s.io": "pause-388667",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "783f075fb56be21ca1578f3f61c29d188acfbb86f5a4678f6521cce2c369cfd1",
	            "SandboxKey": "/var/run/docker/netns/783f075fb56b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32975"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-388667": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:b7:eb:7e:85:11",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5cbd667fa9fcdf372acfc559c4f9c9dd391eb2d089d36e1da313b7a5613f9ea3",
	                    "EndpointID": "52ba65efcb23f1e14c6c78240a39465374b595356e9a7e752ee7d609f612d3b2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-388667",
	                        "644aa94f4c50"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
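
The `NetworkSettings.Ports` map in the inspect output above is what the harness reads with the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` to find the node's SSH endpoint (127.0.0.1:32973 here). A small standalone sketch of the same lookup, decoding the `docker inspect` JSON directly (struct shape assumed to mirror only the fields shown above):

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// container keeps just the fields we need from `docker inspect` output.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "pause-388667").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "docker inspect:", err)
		os.Exit(1)
	}
	var cs []container // docker inspect returns a JSON array of containers
	if err := json.Unmarshal(out, &cs); err != nil || len(cs) == 0 {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	if ssh := cs[0].NetworkSettings.Ports["22/tcp"]; len(ssh) > 0 {
		fmt.Printf("%s:%s\n", ssh[0].HostIp, ssh[0].HostPort) // 127.0.0.1:32973 above
	}
}
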
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-388667 -n pause-388667
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-388667 -n pause-388667: exit status 2 (342.029086ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
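
For reference, `--format={{.Host}}` above is a Go text/template applied to the status output, so only the Host field ("Running") is printed even though the command itself exits non-zero. A rough sketch of that field selection (hypothetical struct, not minikube's actual status types):

package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical shape; the real status output has more fields.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// Same template the post-mortem passes via --format={{.Host}}.
	t := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// Host can still be "Running" while other components are not, which is
	// why the harness treats exit status 2 as "may be ok".
	_ = t.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"})
}
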
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-388667 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-388667 logs -n 25: (1.114000526s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                        │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p test-preload-298333                                                                                            │ test-preload-298333         │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ 16 Oct 25 18:22 UTC │
	│ start   │ -p scheduled-stop-946894 --memory=3072 --driver=docker  --container-runtime=crio                                  │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ 16 Oct 25 18:22 UTC │
	│ stop    │ -p scheduled-stop-946894 --schedule 5m                                                                            │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │                     │
	│ stop    │ -p scheduled-stop-946894 --schedule 5m                                                                            │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │                     │
	│ stop    │ -p scheduled-stop-946894 --schedule 5m                                                                            │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │                     │
	│ stop    │ -p scheduled-stop-946894 --schedule 15s                                                                           │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │                     │
	│ stop    │ -p scheduled-stop-946894 --schedule 15s                                                                           │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │                     │
	│ stop    │ -p scheduled-stop-946894 --schedule 15s                                                                           │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │                     │
	│ stop    │ -p scheduled-stop-946894 --cancel-scheduled                                                                       │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ 16 Oct 25 18:22 UTC │
	│ stop    │ -p scheduled-stop-946894 --schedule 15s                                                                           │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │                     │
	│ stop    │ -p scheduled-stop-946894 --schedule 15s                                                                           │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │                     │
	│ stop    │ -p scheduled-stop-946894 --schedule 15s                                                                           │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │ 16 Oct 25 18:23 UTC │
	│ delete  │ -p scheduled-stop-946894                                                                                          │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │ 16 Oct 25 18:23 UTC │
	│ start   │ -p insufficient-storage-114513 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio  │ insufficient-storage-114513 │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │                     │
	│ delete  │ -p insufficient-storage-114513                                                                                    │ insufficient-storage-114513 │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │ 16 Oct 25 18:24 UTC │
	│ start   │ -p offline-crio-747718 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ offline-crio-747718         │ jenkins │ v1.37.0 │ 16 Oct 25 18:24 UTC │                     │
	│ start   │ -p pause-388667 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio         │ pause-388667                │ jenkins │ v1.37.0 │ 16 Oct 25 18:24 UTC │ 16 Oct 25 18:25 UTC │
	│ start   │ -p stopped-upgrade-637548 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ stopped-upgrade-637548      │ jenkins │ v1.32.0 │ 16 Oct 25 18:24 UTC │ 16 Oct 25 18:24 UTC │
	│ start   │ -p running-upgrade-931818 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ running-upgrade-931818      │ jenkins │ v1.32.0 │ 16 Oct 25 18:24 UTC │ 16 Oct 25 18:24 UTC │
	│ stop    │ stopped-upgrade-637548 stop                                                                                       │ stopped-upgrade-637548      │ jenkins │ v1.32.0 │ 16 Oct 25 18:24 UTC │ 16 Oct 25 18:24 UTC │
	│ start   │ -p running-upgrade-931818 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio          │ running-upgrade-931818      │ jenkins │ v1.37.0 │ 16 Oct 25 18:24 UTC │ 16 Oct 25 18:25 UTC │
	│ start   │ -p stopped-upgrade-637548 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio          │ stopped-upgrade-637548      │ jenkins │ v1.37.0 │ 16 Oct 25 18:24 UTC │                     │
	│ start   │ -p pause-388667 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                  │ pause-388667                │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │ 16 Oct 25 18:25 UTC │
	│ pause   │ -p pause-388667 --alsologtostderr -v=5                                                                            │ pause-388667                │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │                     │
	│ delete  │ -p running-upgrade-931818                                                                                         │ running-upgrade-931818      │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:25:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:25:02.989839  193726 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:25:02.990095  193726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:25:02.990107  193726 out.go:374] Setting ErrFile to fd 2...
	I1016 18:25:02.990114  193726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:25:02.990440  193726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:25:02.991121  193726 out.go:368] Setting JSON to false
	I1016 18:25:02.992331  193726 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4051,"bootTime":1760635052,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:25:02.992412  193726 start.go:141] virtualization: kvm guest
	I1016 18:25:02.994584  193726 out.go:179] * [pause-388667] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:25:02.996016  193726 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:25:02.996007  193726 notify.go:220] Checking for updates...
	I1016 18:25:02.999016  193726 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:25:03.000638  193726 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:25:03.001889  193726 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 18:25:03.006044  193726 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:25:03.007764  193726 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:25:03.010061  193726 config.go:182] Loaded profile config "pause-388667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:25:03.010791  193726 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:25:03.037844  193726 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 18:25:03.037937  193726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:25:03.101615  193726 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-16 18:25:03.090962698 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:25:03.101768  193726 docker.go:318] overlay module found
	I1016 18:25:03.104221  193726 out.go:179] * Using the docker driver based on existing profile
	I1016 18:25:03.105560  193726 start.go:305] selected driver: docker
	I1016 18:25:03.105581  193726 start.go:925] validating driver "docker" against &{Name:pause-388667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-388667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:25:03.105684  193726 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:25:03.105793  193726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:25:03.170336  193726 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:88 SystemTime:2025-10-16 18:25:03.159394358 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:25:03.171178  193726 cni.go:84] Creating CNI manager for ""
	I1016 18:25:03.171244  193726 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:25:03.171303  193726 start.go:349] cluster config:
	{Name:pause-388667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-388667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:25:03.173439  193726 out.go:179] * Starting "pause-388667" primary control-plane node in "pause-388667" cluster
	I1016 18:25:03.174826  193726 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:25:03.176373  193726 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:25:03.177468  193726 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:25:03.177508  193726 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1016 18:25:03.177513  193726 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:25:03.177534  193726 cache.go:58] Caching tarball of preloaded images
	I1016 18:25:03.177633  193726 preload.go:233] Found /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1016 18:25:03.177646  193726 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:25:03.177844  193726 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/pause-388667/config.json ...
	I1016 18:25:03.203801  193726 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:25:03.203822  193726 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:25:03.203871  193726 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:25:03.203906  193726 start.go:360] acquireMachinesLock for pause-388667: {Name:mk7282c5b4dada892a3794a8883e3320d6ea75e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:25:03.204003  193726 start.go:364] duration metric: took 47.91µs to acquireMachinesLock for "pause-388667"
	I1016 18:25:03.204023  193726 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:25:03.204030  193726 fix.go:54] fixHost starting: 
	I1016 18:25:03.204336  193726 cli_runner.go:164] Run: docker container inspect pause-388667 --format={{.State.Status}}
	I1016 18:25:03.223553  193726 fix.go:112] recreateIfNeeded on pause-388667: state=Running err=<nil>
	W1016 18:25:03.223584  193726 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:24:59.394048  193038 out.go:252] * Restarting existing docker container for "stopped-upgrade-637548" ...
	I1016 18:24:59.394106  193038 cli_runner.go:164] Run: docker start stopped-upgrade-637548
	I1016 18:24:59.649288  193038 cli_runner.go:164] Run: docker container inspect stopped-upgrade-637548 --format={{.State.Status}}
	I1016 18:24:59.669441  193038 kic.go:430] container "stopped-upgrade-637548" state is running.
	I1016 18:24:59.670139  193038 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-637548
	I1016 18:24:59.689757  193038 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/config.json ...
	I1016 18:24:59.690009  193038 machine.go:93] provisionDockerMachine start ...
	I1016 18:24:59.690082  193038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-637548
	I1016 18:24:59.709441  193038 main.go:141] libmachine: Using SSH client type: native
	I1016 18:24:59.709744  193038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1016 18:24:59.709763  193038 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:24:59.710436  193038 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59462->127.0.0.1:32993: read: connection reset by peer
	I1016 18:25:02.830904  193038 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-637548
	
	I1016 18:25:02.830928  193038 ubuntu.go:182] provisioning hostname "stopped-upgrade-637548"
	I1016 18:25:02.830976  193038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-637548
	I1016 18:25:02.850507  193038 main.go:141] libmachine: Using SSH client type: native
	I1016 18:25:02.850748  193038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1016 18:25:02.850767  193038 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-637548 && echo "stopped-upgrade-637548" | sudo tee /etc/hostname
	I1016 18:25:02.987118  193038 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-637548
	
	I1016 18:25:02.987208  193038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-637548
	I1016 18:25:03.009531  193038 main.go:141] libmachine: Using SSH client type: native
	I1016 18:25:03.009832  193038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1016 18:25:03.009865  193038 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-637548' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-637548/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-637548' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:25:03.138587  193038 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:25:03.138615  193038 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8849/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8849/.minikube}
	I1016 18:25:03.138735  193038 ubuntu.go:190] setting up certificates
	I1016 18:25:03.138758  193038 provision.go:84] configureAuth start
	I1016 18:25:03.138822  193038 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-637548
	I1016 18:25:03.162755  193038 provision.go:143] copyHostCerts
	I1016 18:25:03.162835  193038 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem, removing ...
	I1016 18:25:03.162856  193038 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem
	I1016 18:25:03.162945  193038 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem (1078 bytes)
	I1016 18:25:03.163132  193038 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem, removing ...
	I1016 18:25:03.163147  193038 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem
	I1016 18:25:03.163199  193038 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem (1123 bytes)
	I1016 18:25:03.163309  193038 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem, removing ...
	I1016 18:25:03.163321  193038 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem
	I1016 18:25:03.163358  193038 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem (1679 bytes)
	I1016 18:25:03.163461  193038 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-637548 san=[127.0.0.1 192.168.94.2 localhost minikube stopped-upgrade-637548]
	I1016 18:25:03.435537  193038 provision.go:177] copyRemoteCerts
	I1016 18:25:03.435603  193038 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:25:03.435648  193038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-637548
	I1016 18:25:03.455048  193038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/stopped-upgrade-637548/id_rsa Username:docker}
	I1016 18:25:03.545432  193038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 18:25:03.573427  193038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1016 18:25:03.600514  193038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:25:03.627324  193038 provision.go:87] duration metric: took 488.551537ms to configureAuth
	I1016 18:25:03.627353  193038 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:25:03.627544  193038 config.go:182] Loaded profile config "stopped-upgrade-637548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1016 18:25:03.627662  193038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-637548
	I1016 18:25:03.650151  193038 main.go:141] libmachine: Using SSH client type: native
	I1016 18:25:03.650386  193038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1016 18:25:03.650412  193038 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:25:03.918821  193038 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:25:03.918848  193038 machine.go:96] duration metric: took 4.22882179s to provisionDockerMachine
	I1016 18:25:03.918860  193038 start.go:293] postStartSetup for "stopped-upgrade-637548" (driver="docker")
	I1016 18:25:03.918873  193038 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:25:03.918929  193038 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:25:03.918980  193038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-637548
	I1016 18:25:03.942274  193038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/stopped-upgrade-637548/id_rsa Username:docker}
	I1016 18:25:04.033230  193038 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:25:04.037393  193038 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:25:04.037427  193038 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1016 18:25:04.037440  193038 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1016 18:25:04.037447  193038 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1016 18:25:04.037458  193038 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/addons for local assets ...
	I1016 18:25:04.037505  193038 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/files for local assets ...
	I1016 18:25:04.037584  193038 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem -> 123752.pem in /etc/ssl/certs
	I1016 18:25:04.037695  193038 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:25:04.047774  193038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:25:04.075639  193038 start.go:296] duration metric: took 156.764743ms for postStartSetup
	I1016 18:25:04.075738  193038 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:25:04.075790  193038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-637548
	I1016 18:25:04.097489  193038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/stopped-upgrade-637548/id_rsa Username:docker}
	I1016 18:25:04.182204  193038 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:25:04.187109  193038 fix.go:56] duration metric: took 4.813234366s for fixHost
	I1016 18:25:04.187140  193038 start.go:83] releasing machines lock for "stopped-upgrade-637548", held for 4.813283763s
	I1016 18:25:04.187204  193038 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-637548
	I1016 18:25:04.206177  193038 ssh_runner.go:195] Run: cat /version.json
	I1016 18:25:04.206223  193038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-637548
	I1016 18:25:04.206312  193038 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:25:04.206388  193038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-637548
	I1016 18:25:04.227258  193038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/stopped-upgrade-637548/id_rsa Username:docker}
	I1016 18:25:04.227244  193038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/stopped-upgrade-637548/id_rsa Username:docker}
	W1016 18:25:04.407740  193038 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.32.0 -> Actual minikube version: v1.37.0
	I1016 18:25:04.407845  193038 ssh_runner.go:195] Run: systemctl --version
	I1016 18:25:04.413700  193038 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:25:04.563450  193038 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1016 18:25:04.570056  193038 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:25:04.580942  193038 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1016 18:25:04.581024  193038 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:25:04.591957  193038 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:25:04.591983  193038 start.go:495] detecting cgroup driver to use...
	I1016 18:25:04.592009  193038 detect.go:190] detected "systemd" cgroup driver on host os
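
The detect.go line reports the "systemd" cgroup driver based on the host's cgroup layout; one common heuristic, sketched below, is to check for a unified cgroup v2 hierarchy (an assumption for illustration, not minikube's exact detection logic):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// cgroup.controllers only exists at the root of a cgroup v2
    	// hierarchy; systemd-managed v2 hosts generally want the
    	// "systemd" cgroup driver rather than "cgroupfs".
    	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
    		fmt.Println("systemd")
    	} else {
    		fmt.Println("cgroupfs")
    	}
    }
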
	I1016 18:25:04.592069  193038 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:25:04.606091  193038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:25:04.619496  193038 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:25:04.619553  193038 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:25:04.635654  193038 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:25:04.649627  193038 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:25:04.724903  193038 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:25:04.802090  193038 docker.go:234] disabling docker service ...
	I1016 18:25:04.802149  193038 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:25:04.818658  193038 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:25:04.835376  193038 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:25:04.911769  193038 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:25:05.009637  193038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:25:05.022661  193038 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:25:05.042469  193038 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1016 18:25:05.042526  193038 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:25:05.054189  193038 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1016 18:25:05.054260  193038 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:25:05.066467  193038 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:25:05.078924  193038 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:25:05.091264  193038 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:25:05.102142  193038 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:25:05.114165  193038 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:25:05.132193  193038 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
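
Taken together, the sed edits above leave the drop-in /etc/crio/crio.conf.d/02-crio.conf containing roughly these settings (reconstructed from the commands themselves, not captured from the host):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
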
	I1016 18:25:05.145365  193038 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:25:05.156383  193038 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:25:05.166658  193038 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:25:05.253011  193038 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 18:25:05.367906  193038 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:25:05.367964  193038 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:25:05.371856  193038 start.go:563] Will wait 60s for crictl version
	I1016 18:25:05.371916  193038 ssh_runner.go:195] Run: which crictl
	I1016 18:25:05.375767  193038 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1016 18:25:05.420273  193038 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1016 18:25:05.420354  193038 ssh_runner.go:195] Run: crio --version
	I1016 18:25:05.464141  193038 ssh_runner.go:195] Run: crio --version
	I1016 18:25:05.520408  193038 out.go:179] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1016 18:25:03.228846  193726 out.go:252] * Updating the running docker "pause-388667" container ...
	I1016 18:25:03.228900  193726 machine.go:93] provisionDockerMachine start ...
	I1016 18:25:03.228984  193726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-388667
	I1016 18:25:03.249393  193726 main.go:141] libmachine: Using SSH client type: native
	I1016 18:25:03.249611  193726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1016 18:25:03.249626  193726 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:25:03.387151  193726 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-388667
	
	I1016 18:25:03.387181  193726 ubuntu.go:182] provisioning hostname "pause-388667"
	I1016 18:25:03.387239  193726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-388667
	I1016 18:25:03.407580  193726 main.go:141] libmachine: Using SSH client type: native
	I1016 18:25:03.407842  193726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1016 18:25:03.407859  193726 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-388667 && echo "pause-388667" | sudo tee /etc/hostname
	I1016 18:25:03.557648  193726 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-388667
	
	I1016 18:25:03.557770  193726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-388667
	I1016 18:25:03.579581  193726 main.go:141] libmachine: Using SSH client type: native
	I1016 18:25:03.579901  193726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1016 18:25:03.579927  193726 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-388667' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-388667/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-388667' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:25:03.720348  193726 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:25:03.720373  193726 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8849/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8849/.minikube}
	I1016 18:25:03.720411  193726 ubuntu.go:190] setting up certificates
	I1016 18:25:03.720424  193726 provision.go:84] configureAuth start
	I1016 18:25:03.720473  193726 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-388667
	I1016 18:25:03.740230  193726 provision.go:143] copyHostCerts
	I1016 18:25:03.740312  193726 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem, removing ...
	I1016 18:25:03.740334  193726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem
	I1016 18:25:03.740403  193726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem (1078 bytes)
	I1016 18:25:03.740534  193726 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem, removing ...
	I1016 18:25:03.740545  193726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem
	I1016 18:25:03.740578  193726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem (1123 bytes)
	I1016 18:25:03.740678  193726 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem, removing ...
	I1016 18:25:03.740690  193726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem
	I1016 18:25:03.740736  193726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem (1679 bytes)
	I1016 18:25:03.740835  193726 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem org=jenkins.pause-388667 san=[127.0.0.1 192.168.76.2 localhost minikube pause-388667]
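
provision.go issues a server certificate whose SAN list covers every name the endpoint may be reached by: loopback, the container IP, and the host/profile names. A self-contained sketch with Go's crypto/x509 (self-signed here for brevity; the real flow signs with the shared minikube CA):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.pause-388667"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SAN set from the log line above.
    		DNSNames:    []string{"localhost", "minikube", "pause-388667"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
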
	I1016 18:25:03.957011  193726 provision.go:177] copyRemoteCerts
	I1016 18:25:03.957080  193726 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:25:03.957123  193726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-388667
	I1016 18:25:03.980710  193726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/pause-388667/id_rsa Username:docker}
	I1016 18:25:04.090072  193726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 18:25:04.112529  193726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1016 18:25:04.131069  193726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:25:04.149290  193726 provision.go:87] duration metric: took 428.854113ms to configureAuth
	I1016 18:25:04.149325  193726 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:25:04.149556  193726 config.go:182] Loaded profile config "pause-388667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:25:04.149664  193726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-388667
	I1016 18:25:04.167616  193726 main.go:141] libmachine: Using SSH client type: native
	I1016 18:25:04.167895  193726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1016 18:25:04.167918  193726 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:25:04.494872  193726 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:25:04.494897  193726 machine.go:96] duration metric: took 1.26598944s to provisionDockerMachine
	I1016 18:25:04.494911  193726 start.go:293] postStartSetup for "pause-388667" (driver="docker")
	I1016 18:25:04.494925  193726 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:25:04.495018  193726 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:25:04.495058  193726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-388667
	I1016 18:25:04.516457  193726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/pause-388667/id_rsa Username:docker}
	I1016 18:25:04.625440  193726 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:25:04.629769  193726 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:25:04.629798  193726 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:25:04.629811  193726 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/addons for local assets ...
	I1016 18:25:04.629861  193726 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/files for local assets ...
	I1016 18:25:04.629950  193726 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem -> 123752.pem in /etc/ssl/certs
	I1016 18:25:04.630072  193726 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:25:04.639359  193726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:25:04.659085  193726 start.go:296] duration metric: took 164.152043ms for postStartSetup
	I1016 18:25:04.659184  193726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:25:04.659341  193726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-388667
	I1016 18:25:04.683250  193726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/pause-388667/id_rsa Username:docker}
	I1016 18:25:04.784434  193726 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:25:04.789595  193726 fix.go:56] duration metric: took 1.58555886s for fixHost
	I1016 18:25:04.789621  193726 start.go:83] releasing machines lock for "pause-388667", held for 1.585607761s
	I1016 18:25:04.789686  193726 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-388667
	I1016 18:25:04.811690  193726 ssh_runner.go:195] Run: cat /version.json
	I1016 18:25:04.811782  193726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-388667
	I1016 18:25:04.811804  193726 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:25:04.811880  193726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-388667
	I1016 18:25:04.834602  193726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/pause-388667/id_rsa Username:docker}
	I1016 18:25:04.836099  193726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/pause-388667/id_rsa Username:docker}
	I1016 18:25:04.933851  193726 ssh_runner.go:195] Run: systemctl --version
	I1016 18:25:05.025710  193726 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:25:05.068064  193726 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:25:05.074076  193726 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:25:05.074140  193726 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:25:05.083535  193726 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:25:05.083559  193726 start.go:495] detecting cgroup driver to use...
	I1016 18:25:05.083591  193726 detect.go:190] detected "systemd" cgroup driver on host os
	I1016 18:25:05.083633  193726 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:25:05.102816  193726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:25:05.117998  193726 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:25:05.118070  193726 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:25:05.136040  193726 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:25:05.151125  193726 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:25:05.290660  193726 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:25:05.429211  193726 docker.go:234] disabling docker service ...
	I1016 18:25:05.429277  193726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:25:05.447009  193726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:25:05.463533  193726 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:25:05.606033  193726 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:25:05.752080  193726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:25:05.766374  193726 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:25:05.783698  193726 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:25:05.783786  193726 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:25:05.794515  193726 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1016 18:25:05.794564  193726 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:25:05.806013  193726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:25:05.816859  193726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:25:05.827179  193726 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:25:05.837265  193726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:25:05.848053  193726 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:25:05.858299  193726 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:25:05.875304  193726 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:25:05.883858  193726 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:25:05.892007  193726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:25:06.016940  193726 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 18:25:06.187221  193726 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:25:06.187289  193726 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:25:06.191552  193726 start.go:563] Will wait 60s for crictl version
	I1016 18:25:06.191608  193726 ssh_runner.go:195] Run: which crictl
	I1016 18:25:06.195555  193726 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:25:06.225586  193726 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:25:06.225664  193726 ssh_runner.go:195] Run: crio --version
	I1016 18:25:06.267939  193726 ssh_runner.go:195] Run: crio --version
	I1016 18:25:06.325613  193726 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1016 18:25:03.568874  180947 node_ready.go:57] node "offline-crio-747718" has "Ready":"False" status (will retry)
	W1016 18:25:06.068401  180947 node_ready.go:57] node "offline-crio-747718" has "Ready":"False" status (will retry)
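
The two W-lines interleaved here belong to a parallel test (offline-crio-747718) polling until the node's Ready condition turns True. A sketch of such a poll using client-go (an assumed dependency; not minikube's actual node_ready.go):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
    	n, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	for {
    		ok, err := nodeReady(cs, "offline-crio-747718")
    		if err == nil && ok {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // "will retry", as the log says
    	}
    }
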
	I1016 18:25:06.326859  193726 cli_runner.go:164] Run: docker network inspect pause-388667 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:25:06.345833  193726 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1016 18:25:06.350674  193726 kubeadm.go:883] updating cluster {Name:pause-388667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-388667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:25:06.350865  193726 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:25:06.350922  193726 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:25:06.387099  193726 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:25:06.387119  193726 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:25:06.387173  193726 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:25:06.442263  193726 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:25:06.442291  193726 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:25:06.442300  193726 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1016 18:25:06.442431  193726 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-388667 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-388667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 18:25:06.442518  193726 ssh_runner.go:195] Run: crio config
	I1016 18:25:06.502088  193726 cni.go:84] Creating CNI manager for ""
	I1016 18:25:06.502116  193726 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:25:06.502137  193726 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:25:06.502164  193726 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-388667 NodeName:pause-388667 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:25:06.502315  193726 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-388667"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 18:25:06.502400  193726 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:25:06.512525  193726 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:25:06.512600  193726 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:25:06.521836  193726 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1016 18:25:06.535327  193726 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:25:06.550238  193726 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1016 18:25:06.565277  193726 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1016 18:25:06.570090  193726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:25:06.685967  193726 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:25:06.699942  193726 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/pause-388667 for IP: 192.168.76.2
	I1016 18:25:06.699961  193726 certs.go:195] generating shared ca certs ...
	I1016 18:25:06.699980  193726 certs.go:227] acquiring lock for ca certs: {Name:mkebf15a3970a66f77cf66e14f6efeaafcab5e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:25:06.700137  193726 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key
	I1016 18:25:06.700192  193726 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key
	I1016 18:25:06.700205  193726 certs.go:257] generating profile certs ...
	I1016 18:25:06.700335  193726 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/pause-388667/client.key
	I1016 18:25:06.700413  193726 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/pause-388667/apiserver.key.091aee1d
	I1016 18:25:06.700462  193726 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/pause-388667/proxy-client.key
	I1016 18:25:06.700600  193726 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem (1338 bytes)
	W1016 18:25:06.700642  193726 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375_empty.pem, impossibly tiny 0 bytes
	I1016 18:25:06.700655  193726 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem (1679 bytes)
	I1016 18:25:06.700691  193726 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem (1078 bytes)
	I1016 18:25:06.700740  193726 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:25:06.700773  193726 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem (1679 bytes)
	I1016 18:25:06.700831  193726 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:25:06.701587  193726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:25:06.722777  193726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1016 18:25:06.743381  193726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:25:06.765941  193726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1016 18:25:06.785387  193726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/pause-388667/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1016 18:25:06.805600  193726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/pause-388667/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 18:25:06.824997  193726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/pause-388667/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:25:06.845868  193726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/pause-388667/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1016 18:25:06.867870  193726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:25:06.890085  193726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem --> /usr/share/ca-certificates/12375.pem (1338 bytes)
	I1016 18:25:06.912012  193726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /usr/share/ca-certificates/123752.pem (1708 bytes)
	I1016 18:25:06.933947  193726 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:25:06.950752  193726 ssh_runner.go:195] Run: openssl version
	I1016 18:25:06.958591  193726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:25:06.969472  193726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:25:06.973949  193726 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:25:06.974045  193726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:25:07.019853  193726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:25:07.030948  193726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12375.pem && ln -fs /usr/share/ca-certificates/12375.pem /etc/ssl/certs/12375.pem"
	I1016 18:25:07.041565  193726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12375.pem
	I1016 18:25:07.046289  193726 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:49 /usr/share/ca-certificates/12375.pem
	I1016 18:25:07.046340  193726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12375.pem
	I1016 18:25:07.084916  193726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12375.pem /etc/ssl/certs/51391683.0"
	I1016 18:25:07.094163  193726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123752.pem && ln -fs /usr/share/ca-certificates/123752.pem /etc/ssl/certs/123752.pem"
	I1016 18:25:07.103546  193726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123752.pem
	I1016 18:25:07.107839  193726 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:49 /usr/share/ca-certificates/123752.pem
	I1016 18:25:07.107914  193726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123752.pem
	I1016 18:25:07.143287  193726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123752.pem /etc/ssl/certs/3ec20f2e.0"
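
The numeric link names above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention, which is why each install step first runs `openssl x509 -hash`. A sketch of the same trust-install step (assuming the openssl binary is on PATH):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
    	// openssl prints the subject hash that TLS libraries use to look
    	// up CA certs in /etc/ssl/certs.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	// os.Symlink fails if the link already exists; the log's `ln -fs`
    	// replaces it unconditionally.
    	if err := os.Symlink(pemPath, link); err != nil {
    		fmt.Println(err)
    	}
    	fmt.Println("trusted via", link)
    }
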
	I1016 18:25:07.152250  193726 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:25:07.156520  193726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:25:07.191995  193726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:25:07.227116  193726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:25:07.264277  193726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:25:07.301115  193726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:25:07.336447  193726 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1016 18:25:07.371778  193726 kubeadm.go:400] StartCluster: {Name:pause-388667 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-388667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:25:07.371884  193726 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:25:07.371942  193726 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:25:07.407636  193726 cri.go:89] found id: "d238953b63b95d0adb614cede346e952ca5e2536f0f1729e27d979eebaf9564e"
	I1016 18:25:07.407660  193726 cri.go:89] found id: "1a0355b809c89c9e7fc817076165843c4af93b1e901010a8b87b8a9d65759c42"
	I1016 18:25:07.407666  193726 cri.go:89] found id: "7446dabe8d3d0742c1525a6d8174594e5ba5ca08c2637968eb687cb7e6a14e12"
	I1016 18:25:07.407671  193726 cri.go:89] found id: "7f40bb35cc90d6a6b501433a822a537de59cf6b3e0f0b0e02bd6ed60cc9d345a"
	I1016 18:25:07.407675  193726 cri.go:89] found id: "ae63c3e46eefb8fd7b28cc9c7ab67cacb5b6660e6a4cdaeac8fe16256cc78716"
	I1016 18:25:07.407679  193726 cri.go:89] found id: "956dd8019a90f588a5ce2079ef107e07c66ee155b4ffedfe9c3f268c4c18fc2d"
	I1016 18:25:07.407683  193726 cri.go:89] found id: "735b1613d4b31ff3f75bea5e311720a3f9b35a809467bd1b306166b4ab7391ac"
	I1016 18:25:07.407686  193726 cri.go:89] found id: ""
	I1016 18:25:07.407745  193726 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 18:25:07.422326  193726 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:25:07Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:25:07.422391  193726 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:25:07.432256  193726 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 18:25:07.432277  193726 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 18:25:07.432324  193726 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 18:25:07.442224  193726 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:25:07.443315  193726 kubeconfig.go:125] found "pause-388667" server: "https://192.168.76.2:8443"
	I1016 18:25:07.444928  193726 kapi.go:59] client config for pause-388667: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/pause-388667/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/pause-388667/client.key", CAFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1016 18:25:07.445461  193726 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1016 18:25:07.445481  193726 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1016 18:25:07.445488  193726 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1016 18:25:07.445494  193726 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1016 18:25:07.445500  193726 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1016 18:25:07.445920  193726 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 18:25:07.456343  193726 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1016 18:25:07.456379  193726 kubeadm.go:601] duration metric: took 24.09462ms to restartPrimaryControlPlane
	I1016 18:25:07.456391  193726 kubeadm.go:402] duration metric: took 84.619977ms to StartCluster
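
The "does not require reconfiguration" decision above hinges on a plain diff: the freshly rendered kubeadm.yaml.new is compared against the config already on the node, and diff's exit status 0 means nothing changed. A minimal sketch of that check:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	out, err := cmd.CombinedOutput()
    	if err == nil {
    		// diff exits 0 when the files match: no control-plane restart needed.
    		fmt.Println("configs match: no reconfiguration required")
    		return
    	}
    	fmt.Printf("configs differ, reconfiguring:\n%s", out)
    }
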
	I1016 18:25:07.456414  193726 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:25:07.456489  193726 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:25:07.457971  193726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:25:07.458265  193726 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:25:07.458336  193726 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:25:07.458493  193726 config.go:182] Loaded profile config "pause-388667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:25:07.462879  193726 out.go:179] * Verifying Kubernetes components...
	I1016 18:25:07.462883  193726 out.go:179] * Enabled addons: 
	I1016 18:25:02.929786  190882 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 db4ef01ca93c624089dc0c4105b6aa334128bd311dd99e03b6b0f4e033bf8da0 604d4c8018015947f50808dcbb6120a1fd57cd337ed4c34eb4413ff21f974084 ee682ebf007955f2750b5d2386ce15993bec37bfce8e1195dc81a3e4775bf91c 14f2c5b8a2d9c5644c6eac70c6d38bb81561570a6db2825d188614bfb192d922: (10.634973048s)
	I1016 18:25:02.929865  190882 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1016 18:25:02.972775  190882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 18:25:02.984304  190882 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Oct 16 18:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Oct 16 18:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct 16 18:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Oct 16 18:24 /etc/kubernetes/scheduler.conf
	
	I1016 18:25:02.984364  190882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I1016 18:25:02.995866  190882 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:25:02.995919  190882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 18:25:03.007674  190882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I1016 18:25:03.018036  190882 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:25:03.018100  190882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 18:25:03.028647  190882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I1016 18:25:03.039046  190882 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:25:03.039112  190882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1016 18:25:03.049729  190882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I1016 18:25:03.063066  190882 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:25:03.063133  190882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1016 18:25:03.075933  190882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 18:25:03.088965  190882 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:25:03.153606  190882 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:25:03.715961  190882 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:25:03.894150  190882 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:25:03.961374  190882 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
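
Rather than a full `kubeadm init`, the restart path above replays the individual init phases, in order, against the same rendered config. A sketch of that sequence (an illustrative driver; the real code runs each phase through ssh_runner with the pinned binaries PATH):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		out, err := exec.Command("kubeadm", args...).CombinedOutput()
    		fmt.Printf("kubeadm %v:\n%s", p, out)
    		if err != nil {
    			panic(err) // a failed phase aborts the restart
    		}
    	}
    }
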
	I1016 18:25:04.024108  190882 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:25:04.024191  190882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:25:04.524484  190882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:25:04.540262  190882 api_server.go:72] duration metric: took 516.161314ms to wait for apiserver process to appear ...
	I1016 18:25:04.540290  190882 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:25:04.540310  190882 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1016 18:25:04.540679  190882 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1016 18:25:05.040358  190882 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1016 18:25:06.396757  190882 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1016 18:25:06.396785  190882 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1016 18:25:06.396799  190882 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1016 18:25:06.453293  190882 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1016 18:25:06.453328  190882 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1016 18:25:06.540493  190882 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1016 18:25:06.545042  190882 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1016 18:25:06.545074  190882 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[response body identical to the 500 response logged immediately above]
	I1016 18:25:07.040673  190882 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1016 18:25:07.046244  190882 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1016 18:25:07.046275  190882 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[response body identical to the 500 response logged immediately above]
	I1016 18:25:07.540932  190882 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1016 18:25:07.545304  190882 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1016 18:25:07.545328  190882 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[response body identical to the 500 response logged immediately above]
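The healthz sequence above is worth decoding: the probe is an anonymous HTTPS GET, so it is rejected with 403 until the RBAC bootstrap machinery has created the binding that lets unauthenticated callers read /healthz; after that the apiserver answers 500 for as long as any poststarthook is still pending ("reason withheld" because an anonymous caller is not entitled to failure details), and only an all-ok run yields the 200 that ends the wait. A minimal sketch of the polling client (hypothetical helper; minikube's real logic lives in the api_server.go lines referenced above):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls an apiserver /healthz endpoint until it returns 200
    // or the deadline passes. The probe is anonymous, so 403s and 500s are
    // expected while the control plane is still bootstrapping.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // The apiserver's serving cert is not in the host trust store here.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // body is the literal "ok"
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.103.2:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }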
	I1016 18:25:07.464657  193726 addons.go:514] duration metric: took 6.330095ms for enable addons: enabled=[]
	I1016 18:25:07.464703  193726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:25:07.626466  193726 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:25:07.641080  193726 node_ready.go:35] waiting up to 6m0s for node "pause-388667" to be "Ready" ...
	I1016 18:25:07.649459  193726 node_ready.go:49] node "pause-388667" is "Ready"
	I1016 18:25:07.649485  193726 node_ready.go:38] duration metric: took 8.37611ms for node "pause-388667" to be "Ready" ...
	I1016 18:25:07.649497  193726 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:25:07.649547  193726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:25:07.661351  193726 api_server.go:72] duration metric: took 203.050561ms to wait for apiserver process to appear ...
	I1016 18:25:07.661375  193726 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:25:07.661389  193726 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:25:07.665338  193726 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1016 18:25:07.666203  193726 api_server.go:141] control plane version: v1.34.1
	I1016 18:25:07.666228  193726 api_server.go:131] duration metric: took 4.846804ms to wait for apiserver health ...
	I1016 18:25:07.666237  193726 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:25:07.669275  193726 system_pods.go:59] 7 kube-system pods found
	I1016 18:25:07.669310  193726 system_pods.go:61] "coredns-66bc5c9577-x5rl8" [b0eca4a0-3dbd-4ffd-bade-48b478c8d7a1] Running
	I1016 18:25:07.669318  193726 system_pods.go:61] "etcd-pause-388667" [c75b898a-96b1-4294-b0ca-9fcbc7ee2fac] Running
	I1016 18:25:07.669322  193726 system_pods.go:61] "kindnet-bk5tb" [72d1a38b-8257-40f2-9d37-c3167d464bbf] Running
	I1016 18:25:07.669326  193726 system_pods.go:61] "kube-apiserver-pause-388667" [91f9df72-74fe-438c-9382-45b53a901351] Running
	I1016 18:25:07.669331  193726 system_pods.go:61] "kube-controller-manager-pause-388667" [a7c6a6e4-edcb-40c8-aef9-8dfc81855e3e] Running
	I1016 18:25:07.669335  193726 system_pods.go:61] "kube-proxy-bkkgz" [34aa43df-45b5-45b5-96f8-1b1d1dd4bf3d] Running
	I1016 18:25:07.669340  193726 system_pods.go:61] "kube-scheduler-pause-388667" [dfbceef0-1e82-478e-b548-da2ec677c75b] Running
	I1016 18:25:07.669348  193726 system_pods.go:74] duration metric: took 3.104113ms to wait for pod list to return data ...
	I1016 18:25:07.669360  193726 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:25:07.671194  193726 default_sa.go:45] found service account: "default"
	I1016 18:25:07.671215  193726 default_sa.go:55] duration metric: took 1.847745ms for default service account to be created ...
	I1016 18:25:07.671225  193726 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 18:25:07.673590  193726 system_pods.go:86] 7 kube-system pods found
	I1016 18:25:07.673610  193726 system_pods.go:89] "coredns-66bc5c9577-x5rl8" [b0eca4a0-3dbd-4ffd-bade-48b478c8d7a1] Running
	I1016 18:25:07.673615  193726 system_pods.go:89] "etcd-pause-388667" [c75b898a-96b1-4294-b0ca-9fcbc7ee2fac] Running
	I1016 18:25:07.673618  193726 system_pods.go:89] "kindnet-bk5tb" [72d1a38b-8257-40f2-9d37-c3167d464bbf] Running
	I1016 18:25:07.673622  193726 system_pods.go:89] "kube-apiserver-pause-388667" [91f9df72-74fe-438c-9382-45b53a901351] Running
	I1016 18:25:07.673628  193726 system_pods.go:89] "kube-controller-manager-pause-388667" [a7c6a6e4-edcb-40c8-aef9-8dfc81855e3e] Running
	I1016 18:25:07.673633  193726 system_pods.go:89] "kube-proxy-bkkgz" [34aa43df-45b5-45b5-96f8-1b1d1dd4bf3d] Running
	I1016 18:25:07.673639  193726 system_pods.go:89] "kube-scheduler-pause-388667" [dfbceef0-1e82-478e-b548-da2ec677c75b] Running
	I1016 18:25:07.673647  193726 system_pods.go:126] duration metric: took 2.416924ms to wait for k8s-apps to be running ...
	I1016 18:25:07.673657  193726 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 18:25:07.673696  193726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:25:07.687783  193726 system_svc.go:56] duration metric: took 14.120595ms WaitForService to wait for kubelet
	I1016 18:25:07.687807  193726 kubeadm.go:586] duration metric: took 229.512368ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:25:07.687826  193726 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:25:07.690039  193726 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1016 18:25:07.690060  193726 node_conditions.go:123] node cpu capacity is 8
	I1016 18:25:07.690069  193726 node_conditions.go:105] duration metric: took 2.23889ms to run NodePressure ...
	I1016 18:25:07.690080  193726 start.go:241] waiting for startup goroutines ...
	I1016 18:25:07.690087  193726 start.go:246] waiting for cluster config update ...
	I1016 18:25:07.690094  193726 start.go:255] writing updated cluster config ...
	I1016 18:25:07.690347  193726 ssh_runner.go:195] Run: rm -f paused
	I1016 18:25:07.694167  193726 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:25:07.694830  193726 kapi.go:59] client config for pause-388667: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/pause-388667/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/pause-388667/client.key", CAFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
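The rest.Config dump above shows how these waiters talk to the cluster: client-certificate authentication (CertFile/KeyFile under the profile directory) against the apiserver endpoint, verified with the minikube CA. Building the equivalent client by hand with k8s.io/client-go would look roughly like this (a sketch under those assumptions, not minikube's code):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        profile := "/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/pause-388667"
        cfg := &rest.Config{
            Host: "https://192.168.76.2:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: profile + "/client.crt", // client-certificate auth, as in the dump
                KeyFile:  profile + "/client.key",
                CAFile:   "/home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("client ready:", clientset != nil)
    }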
	I1016 18:25:07.697331  193726 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x5rl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:25:07.701308  193726 pod_ready.go:94] pod "coredns-66bc5c9577-x5rl8" is "Ready"
	I1016 18:25:07.701331  193726 pod_ready.go:86] duration metric: took 3.982067ms for pod "coredns-66bc5c9577-x5rl8" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:25:07.703178  193726 pod_ready.go:83] waiting for pod "etcd-pause-388667" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:25:07.706803  193726 pod_ready.go:94] pod "etcd-pause-388667" is "Ready"
	I1016 18:25:07.706822  193726 pod_ready.go:86] duration metric: took 3.626474ms for pod "etcd-pause-388667" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:25:07.708676  193726 pod_ready.go:83] waiting for pod "kube-apiserver-pause-388667" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:25:07.712063  193726 pod_ready.go:94] pod "kube-apiserver-pause-388667" is "Ready"
	I1016 18:25:07.712079  193726 pod_ready.go:86] duration metric: took 3.38546ms for pod "kube-apiserver-pause-388667" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:25:07.713865  193726 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-388667" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:25:08.041024  190882 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1016 18:25:08.045389  190882 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1016 18:25:08.051954  190882 api_server.go:141] control plane version: v1.28.3
	I1016 18:25:08.051978  190882 api_server.go:131] duration metric: took 3.511681872s to wait for apiserver health ...
	I1016 18:25:08.051989  190882 cni.go:84] Creating CNI manager for ""
	I1016 18:25:08.051996  190882 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:25:08.053932  190882 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 18:25:08.055245  190882 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 18:25:08.059623  190882 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1016 18:25:08.059641  190882 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 18:25:08.081214  190882 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 18:25:08.749112  190882 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:25:08.752953  190882 system_pods.go:59] 5 kube-system pods found
	I1016 18:25:08.753015  190882 system_pods.go:61] "etcd-running-upgrade-931818" [05a9d69d-201c-489e-a651-8b41095bf3ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 18:25:08.753029  190882 system_pods.go:61] "kube-apiserver-running-upgrade-931818" [65d52ac3-b222-4e58-98d4-0e6465ab7b70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:25:08.753043  190882 system_pods.go:61] "kube-controller-manager-running-upgrade-931818" [317232d1-a273-407f-a951-2117bc0cf798] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:25:08.753055  190882 system_pods.go:61] "kube-scheduler-running-upgrade-931818" [d9c70a27-d7af-46c2-8f38-24c78cced4f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:25:08.753064  190882 system_pods.go:61] "storage-provisioner" [fad19060-9581-4c81-b943-050c7a438109] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I1016 18:25:08.753078  190882 system_pods.go:74] duration metric: took 3.941583ms to wait for pod list to return data ...
	I1016 18:25:08.753091  190882 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:25:08.755817  190882 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1016 18:25:08.755843  190882 node_conditions.go:123] node cpu capacity is 8
	I1016 18:25:08.755862  190882 node_conditions.go:105] duration metric: took 2.764952ms to run NodePressure ...
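The system_pods lines above come from a plain pod list in the kube-system namespace; the "Running / Ready:ContainersNotReady" annotations are the pod phase plus its Ready condition. A sketch of the same inspection with client-go (kubeconfig path assumed; not the actual minikube code):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            fmt.Printf("%s phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
        }
    }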
	I1016 18:25:08.755913  190882 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:25:08.923605  190882 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 18:25:08.931377  190882 ops.go:34] apiserver oom_adj: -16
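The oom_adj read above confirms the apiserver is shielded from the kernel OOM killer: -16 lowers its kill priority (oom_adj is the legacy /proc interface; current kernels canonically expose oom_score_adj). A sketch of the same check:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Find the newest kube-apiserver PID, then read its legacy OOM weight.
        out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj) // -16 in the run above
    }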
	I1016 18:25:08.931399  190882 kubeadm.go:601] duration metric: took 16.701586362s to restartPrimaryControlPlane
	I1016 18:25:08.931419  190882 kubeadm.go:402] duration metric: took 16.768145266s to StartCluster
	I1016 18:25:08.931445  190882 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:25:08.931512  190882 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:25:08.933269  190882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:25:08.933596  190882 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:25:08.933641  190882 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:25:08.933772  190882 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-931818"
	I1016 18:25:08.933795  190882 addons.go:238] Setting addon storage-provisioner=true in "running-upgrade-931818"
	W1016 18:25:08.933804  190882 addons.go:247] addon storage-provisioner should already be in state true
	I1016 18:25:08.933811  190882 config.go:182] Loaded profile config "running-upgrade-931818": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1016 18:25:08.933815  190882 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-931818"
	I1016 18:25:08.933833  190882 host.go:66] Checking if "running-upgrade-931818" exists ...
	I1016 18:25:08.933853  190882 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-931818"
	I1016 18:25:08.934171  190882 cli_runner.go:164] Run: docker container inspect running-upgrade-931818 --format={{.State.Status}}
	I1016 18:25:08.934364  190882 cli_runner.go:164] Run: docker container inspect running-upgrade-931818 --format={{.State.Status}}
	I1016 18:25:08.935228  190882 out.go:179] * Verifying Kubernetes components...
	I1016 18:25:08.936562  190882 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:25:08.958819  190882 kapi.go:59] client config for running-upgrade-931818: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/running-upgrade-931818/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/running-upgrade-931818/client.key", CAFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1016 18:25:08.959325  190882 addons.go:238] Setting addon default-storageclass=true in "running-upgrade-931818"
	W1016 18:25:08.959360  190882 addons.go:247] addon default-storageclass should already be in state true
	I1016 18:25:08.959400  190882 host.go:66] Checking if "running-upgrade-931818" exists ...
	I1016 18:25:08.960821  190882 cli_runner.go:164] Run: docker container inspect running-upgrade-931818 --format={{.State.Status}}
	I1016 18:25:08.961166  190882 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:25:05.523476  193038 cli_runner.go:164] Run: docker network inspect stopped-upgrade-637548 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:25:05.542507  193038 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1016 18:25:05.547588  193038 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
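The bash one-liner above is the recurring /etc/hosts idiom in these logs: drop any stale line for the name, append the fresh "IP<tab>name" entry into a temp file, then install it with a single sudo cp so /etc/hosts is never edited in place. A rough Go equivalent (the helper name and temp path are invented for illustration):

    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // addHostEntry re-creates the grep/echo/cp pipeline from the log: drop
    // stale lines for name, append "ip\tname", install via sudo cp.
    func addHostEntry(ip, name string) error {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        var keep []string
        for _, ln := range strings.Split(string(data), "\n") {
            if !strings.HasSuffix(ln, "\t"+name) {
                keep = append(keep, ln)
            }
        }
        keep = append(keep, ip+"\t"+name)
        tmp := filepath.Join(os.TempDir(), "hosts.new")
        if err := os.WriteFile(tmp, []byte(strings.Join(keep, "\n")+"\n"), 0o644); err != nil {
            return err
        }
        // /etc/hosts is root-owned, so the final copy goes through sudo.
        return exec.Command("sudo", "cp", tmp, "/etc/hosts").Run()
    }

    func main() {
        if err := addHostEntry("192.168.94.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }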
	I1016 18:25:05.561191  193038 kubeadm.go:883] updating cluster {Name:stopped-upgrade-637548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-637548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:25:05.561311  193038 preload.go:183] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1016 18:25:05.561410  193038 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:25:05.610845  193038 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:25:05.610869  193038 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:25:05.610911  193038 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:25:05.660451  193038 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:25:05.660470  193038 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:25:05.660478  193038 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.28.3 crio true true} ...
	I1016 18:25:05.660562  193038 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=stopped-upgrade-637548 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-637548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
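The generated unit above uses the standard systemd drop-in trick: the first, empty ExecStart= clears the command inherited from the base kubelet.service, and the second ExecStart= installs the fully flagged command line. A sketch of writing such a drop-in (paths and flags taken from the log; not minikube's actual code):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        dropIn := `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=stopped-upgrade-637548 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
    `
        // The blank ExecStart= line resets the base unit's command list before
        // the drop-in's own ExecStart takes effect.
        if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
            panic(err)
        }
        if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
            panic(err)
        }
    }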
	I1016 18:25:05.660623  193038 ssh_runner.go:195] Run: crio config
	I1016 18:25:05.725220  193038 cni.go:84] Creating CNI manager for ""
	I1016 18:25:05.725239  193038 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:25:05.725255  193038 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:25:05.725279  193038 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-637548 NodeName:stopped-upgrade-637548 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:25:05.725424  193038 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-637548"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
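The generated config above is a single file holding four YAML documents: InitConfiguration and ClusterConfiguration for kubeadm, plus KubeletConfiguration and KubeProxyConfiguration. Note that imageGCHighThresholdPercent: 100 and the "0%" evictionHard thresholds deliberately turn off kubelet disk management inside the test container. Walking such a multi-document file takes a streaming decoder, e.g. (a sketch assuming gopkg.in/yaml.v3):

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Println(doc["kind"]) // InitConfiguration, ClusterConfiguration, ...
        }
    }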
	
	I1016 18:25:05.725491  193038 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1016 18:25:05.736464  193038 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:25:05.736532  193038 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:25:05.747889  193038 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1016 18:25:05.769413  193038 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:25:05.792352  193038 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1016 18:25:05.815923  193038 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1016 18:25:05.819641  193038 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:25:05.833583  193038 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:25:05.917815  193038 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:25:05.944298  193038 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548 for IP: 192.168.94.2
	I1016 18:25:05.944322  193038 certs.go:195] generating shared ca certs ...
	I1016 18:25:05.944346  193038 certs.go:227] acquiring lock for ca certs: {Name:mkebf15a3970a66f77cf66e14f6efeaafcab5e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:25:05.944502  193038 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key
	I1016 18:25:05.944558  193038 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key
	I1016 18:25:05.944569  193038 certs.go:257] generating profile certs ...
	I1016 18:25:05.944660  193038 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/client.key
	I1016 18:25:05.944691  193038 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/apiserver.key.b9b5b0f9
	I1016 18:25:05.944744  193038 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/apiserver.crt.b9b5b0f9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1016 18:25:06.155372  193038 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/apiserver.crt.b9b5b0f9 ...
	I1016 18:25:06.155397  193038 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/apiserver.crt.b9b5b0f9: {Name:mk9935d9bd3b2acac11d240f61768b65f4907af4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:25:06.155561  193038 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/apiserver.key.b9b5b0f9 ...
	I1016 18:25:06.155580  193038 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/apiserver.key.b9b5b0f9: {Name:mkb68e58e7d9c20141aec794a01d7d9d9fa854a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:25:06.155681  193038 certs.go:382] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/apiserver.crt.b9b5b0f9 -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/apiserver.crt
	I1016 18:25:06.155924  193038 certs.go:386] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/apiserver.key.b9b5b0f9 -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/apiserver.key
	I1016 18:25:06.156083  193038 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/proxy-client.key
	I1016 18:25:06.156201  193038 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem (1338 bytes)
	W1016 18:25:06.156230  193038 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375_empty.pem, impossibly tiny 0 bytes
	I1016 18:25:06.156240  193038 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem (1679 bytes)
	I1016 18:25:06.156260  193038 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem (1078 bytes)
	I1016 18:25:06.156284  193038 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:25:06.156307  193038 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem (1679 bytes)
	I1016 18:25:06.156344  193038 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:25:06.156925  193038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:25:06.186149  193038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1016 18:25:06.213806  193038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:25:06.249695  193038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1016 18:25:06.292033  193038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1016 18:25:06.330312  193038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 18:25:06.359136  193038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:25:06.393303  193038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:25:06.434125  193038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem --> /usr/share/ca-certificates/12375.pem (1338 bytes)
	I1016 18:25:06.477162  193038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /usr/share/ca-certificates/123752.pem (1708 bytes)
	I1016 18:25:06.510061  193038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:25:06.539042  193038 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:25:06.560729  193038 ssh_runner.go:195] Run: openssl version
	I1016 18:25:06.566952  193038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12375.pem && ln -fs /usr/share/ca-certificates/12375.pem /etc/ssl/certs/12375.pem"
	I1016 18:25:06.578147  193038 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12375.pem
	I1016 18:25:06.582264  193038 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:49 /usr/share/ca-certificates/12375.pem
	I1016 18:25:06.582329  193038 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12375.pem
	I1016 18:25:06.590132  193038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12375.pem /etc/ssl/certs/51391683.0"
	I1016 18:25:06.600231  193038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123752.pem && ln -fs /usr/share/ca-certificates/123752.pem /etc/ssl/certs/123752.pem"
	I1016 18:25:06.616350  193038 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123752.pem
	I1016 18:25:06.620517  193038 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:49 /usr/share/ca-certificates/123752.pem
	I1016 18:25:06.620570  193038 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123752.pem
	I1016 18:25:06.627507  193038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123752.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:25:06.637434  193038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:25:06.648026  193038 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:25:06.652249  193038 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:25:06.652306  193038 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:25:06.659562  193038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
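The openssl/ln pairs above install each CA the way OpenSSL's lookup expects: "openssl x509 -hash -noout" prints the subject-name hash (b5213941 for minikubeCA here), and a symlink named <hash>.0 in /etc/ssl/certs is what makes the certificate discoverable at verification time. One iteration, sketched in Go (the helper name is invented):

    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCert symlinks a PEM certificate into /etc/ssl/certs under its
    // OpenSSL subject hash, mirroring the "openssl x509 -hash" + "ln -fs"
    // pair in the log.
    func installCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // ln -fs semantics: replace an existing link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            panic(err)
        }
    }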
	I1016 18:25:06.669174  193038 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:25:06.673197  193038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:25:06.680350  193038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:25:06.687943  193038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:25:06.694746  193038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:25:06.703038  193038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:25:06.710872  193038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
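Each "openssl x509 -checkend 86400" above asks a single question: does the certificate expire within the next 86400 seconds (24 hours)? The same test in Go:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within
    // d, which is exactly what "openssl x509 -checkend" tests.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }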
	I1016 18:25:06.719237  193038 kubeadm.go:400] StartCluster: {Name:stopped-upgrade-637548 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-637548 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:25:06.719322  193038 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:25:06.719389  193038 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:25:06.761292  193038 cri.go:89] found id: ""
	I1016 18:25:06.761370  193038 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W1016 18:25:06.771270  193038 kubeadm.go:413] apiserver tunnel failed: apiserver port not set
	I1016 18:25:06.771292  193038 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 18:25:06.771298  193038 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 18:25:06.771346  193038 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 18:25:06.780852  193038 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:25:06.781934  193038 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-637548" does not appear in /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:25:06.782659  193038 kubeconfig.go:62] /home/jenkins/minikube-integration/21738-8849/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-637548" cluster setting kubeconfig missing "stopped-upgrade-637548" context setting]
	I1016 18:25:06.783619  193038 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:25:06.784455  193038 kapi.go:59] client config for stopped-upgrade-637548: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/client.key", CAFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1016 18:25:06.785043  193038 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1016 18:25:06.785060  193038 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1016 18:25:06.785064  193038 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1016 18:25:06.785068  193038 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1016 18:25:06.785071  193038 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1016 18:25:06.785435  193038 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 18:25:06.795563  193038 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-16 18:24:35.356910896 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-16 18:25:05.813018653 +0000
	@@ -50,6 +50,7 @@
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: systemd
	+containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	 hairpinMode: hairpin-veth
	 runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	
	-- /stdout --
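The reconfigure decision hangs entirely on diff's exit status: "diff -u old new" exits 0 when the freshly rendered kubeadm.yaml matches what is on disk and 1 when it drifted (here, the added containerRuntimeEndpoint line). A sketch of that gate:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        out, err := cmd.CombinedOutput()
        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("config unchanged, reuse existing cluster")
        case errors.As(err, &ee) && ee.ExitCode() == 1:
            fmt.Printf("config drift, reconfiguring:\n%s", out) // exit 1 = files differ
        default:
            panic(err) // exit 2 = diff itself failed
        }
    }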
	I1016 18:25:06.795579  193038 kubeadm.go:1160] stopping kube-system containers ...
	I1016 18:25:06.795590  193038 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1016 18:25:06.795639  193038 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:25:06.835395  193038 cri.go:89] found id: ""
	I1016 18:25:06.835460  193038 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1016 18:25:06.849935  193038 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 18:25:06.861654  193038 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Oct 16 18:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Oct 16 18:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct 16 18:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Oct 16 18:24 /etc/kubernetes/scheduler.conf
	
	I1016 18:25:06.861744  193038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I1016 18:25:06.873611  193038 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:25:06.873659  193038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 18:25:06.884522  193038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I1016 18:25:06.896777  193038 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:25:06.896840  193038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 18:25:06.908000  193038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I1016 18:25:06.919768  193038 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:25:06.919840  193038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1016 18:25:06.930455  193038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I1016 18:25:06.942092  193038 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:25:06.942151  193038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1016 18:25:06.954213  193038 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 18:25:06.966161  193038 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:25:07.035300  193038 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:25:07.744502  193038 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:25:07.884023  193038 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:25:07.945986  193038 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:25:08.004457  193038 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:25:08.004524  193038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:25:08.505843  193038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:25:09.005526  193038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:25:09.023447  193038 api_server.go:72] duration metric: took 1.018992755s to wait for apiserver process to appear ...
	I1016 18:25:09.023473  193038 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:25:09.023569  193038 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
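
Several minikube processes (193038, 193726, 190882, 180947) write into this log concurrently, so timestamps jump backwards between blocks. The 193038 block above shows a stale control plane being regenerated: the kubeconfig files are deleted because their server URL no longer matches, and the individual kubeadm phases are then re-run against the rendered config. A minimal by-hand sketch of the same sequence inside the node (paths and the v1.28.3 binary directory are taken verbatim from the log; re-running these on a live cluster is destructive):

	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo /bin/bash -c "env PATH=\"/var/lib/minikube/binaries/v1.28.3:\$PATH\" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml"
	done
	# then poll for the apiserver process, as api_server.go does:
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
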
	I1016 18:25:08.098325  193726 pod_ready.go:94] pod "kube-controller-manager-pause-388667" is "Ready"
	I1016 18:25:08.098365  193726 pod_ready.go:86] duration metric: took 384.46973ms for pod "kube-controller-manager-pause-388667" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:25:08.298224  193726 pod_ready.go:83] waiting for pod "kube-proxy-bkkgz" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:25:08.697547  193726 pod_ready.go:94] pod "kube-proxy-bkkgz" is "Ready"
	I1016 18:25:08.697577  193726 pod_ready.go:86] duration metric: took 399.332028ms for pod "kube-proxy-bkkgz" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:25:08.898832  193726 pod_ready.go:83] waiting for pod "kube-scheduler-pause-388667" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:25:09.298375  193726 pod_ready.go:94] pod "kube-scheduler-pause-388667" is "Ready"
	I1016 18:25:09.298406  193726 pod_ready.go:86] duration metric: took 399.550723ms for pod "kube-scheduler-pause-388667" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:25:09.298420  193726 pod_ready.go:40] duration metric: took 1.60421996s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:25:09.351126  193726 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1016 18:25:09.353434  193726 out.go:179] * Done! kubectl is now configured to use "pause-388667" cluster and "default" namespace by default
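
The 193726 block polls each control-plane pod by label until it is "Ready" before printing Done. A rough equivalent check by hand (the pause-388667 kubectl context name is an assumption based on minikube's profile naming):

	kubectl --context pause-388667 -n kube-system get pods \
	  -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)' -o wide
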
	I1016 18:25:08.962563  190882 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:25:08.962581  190882 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:25:08.962630  190882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-931818
	I1016 18:25:08.991481  190882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/running-upgrade-931818/id_rsa Username:docker}
	I1016 18:25:08.993394  190882 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:25:08.993415  190882 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:25:08.993467  190882 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-931818
	I1016 18:25:09.018622  190882 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32983 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/running-upgrade-931818/id_rsa Username:docker}
	I1016 18:25:09.075540  190882 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:25:09.090838  190882 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:25:09.090914  190882 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:25:09.103916  190882 api_server.go:72] duration metric: took 170.287292ms to wait for apiserver process to appear ...
	I1016 18:25:09.103946  190882 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:25:09.103966  190882 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1016 18:25:09.107614  190882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:25:09.110622  190882 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1016 18:25:09.111805  190882 api_server.go:141] control plane version: v1.28.3
	I1016 18:25:09.111835  190882 api_server.go:131] duration metric: took 7.880291ms to wait for apiserver health ...
	I1016 18:25:09.111845  190882 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:25:09.116761  190882 system_pods.go:59] 5 kube-system pods found
	I1016 18:25:09.116787  190882 system_pods.go:61] "etcd-running-upgrade-931818" [05a9d69d-201c-489e-a651-8b41095bf3ea] Running
	I1016 18:25:09.116801  190882 system_pods.go:61] "kube-apiserver-running-upgrade-931818" [65d52ac3-b222-4e58-98d4-0e6465ab7b70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:25:09.116813  190882 system_pods.go:61] "kube-controller-manager-running-upgrade-931818" [317232d1-a273-407f-a951-2117bc0cf798] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:25:09.116828  190882 system_pods.go:61] "kube-scheduler-running-upgrade-931818" [d9c70a27-d7af-46c2-8f38-24c78cced4f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:25:09.116838  190882 system_pods.go:61] "storage-provisioner" [fad19060-9581-4c81-b943-050c7a438109] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I1016 18:25:09.116847  190882 system_pods.go:74] duration metric: took 4.994688ms to wait for pod list to return data ...
	I1016 18:25:09.116863  190882 kubeadm.go:586] duration metric: took 183.236985ms to wait for: map[apiserver:true system_pods:true]
	I1016 18:25:09.116886  190882 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:25:09.119985  190882 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1016 18:25:09.120010  190882 node_conditions.go:123] node cpu capacity is 8
	I1016 18:25:09.120025  190882 node_conditions.go:105] duration metric: took 3.13422ms to run NodePressure ...
	I1016 18:25:09.120070  190882 start.go:241] waiting for startup goroutines ...
	I1016 18:25:09.130402  190882 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:25:09.492303  190882 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1016 18:25:09.493590  190882 addons.go:514] duration metric: took 559.956003ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1016 18:25:09.493630  190882 start.go:246] waiting for cluster config update ...
	I1016 18:25:09.493644  190882 start.go:255] writing updated cluster config ...
	I1016 18:25:09.493930  190882 ssh_runner.go:195] Run: rm -f paused
	I1016 18:25:09.554868  190882 start.go:624] kubectl: 1.34.1, cluster: 1.28.3 (minor skew: 6)
	I1016 18:25:09.556400  190882 out.go:203] 
	W1016 18:25:09.561870  190882 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.3.
	I1016 18:25:09.563114  190882 out.go:179]   - Want kubectl v1.28.3? Try 'minikube kubectl -- get pods -A'
	I1016 18:25:09.564402  190882 out.go:179] * Done! kubectl is now configured to use "running-upgrade-931818" cluster and "default" namespace by default
	W1016 18:25:08.068557  180947 node_ready.go:57] node "offline-crio-747718" has "Ready":"False" status (will retry)
	W1016 18:25:10.568117  180947 node_ready.go:57] node "offline-crio-747718" has "Ready":"False" status (will retry)
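
The healthz wait above (api_server.go:253) is a plain GET against the apiserver, which the 190882 run shows returning 200 "ok". Roughly, from the host (a sketch; -k skips certificate verification, whereas minikube validates against the cluster CA):

	curl -sk https://192.168.103.2:8443/healthz
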
	
	
	==> CRI-O <==
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.1164564Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.11732993Z" level=info msg="Conmon does support the --sync option"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.117350747Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.117364721Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.118125357Z" level=info msg="Conmon does support the --sync option"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.118145001Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.122416443Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.122442182Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.123047197Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/c
ni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/
var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.123402418Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.123456476Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.129750491Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.181310718Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-x5rl8 Namespace:kube-system ID:3092e0ca9cc3d1c96b4399fabe7dc26437d37e3d9d4de4263e5c53c6871b5832 UID:b0eca4a0-3dbd-4ffd-bade-48b478c8d7a1 NetNS:/var/run/netns/d9a8e4a8-8ee2-4ddc-b557-969126898f5b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00017e478}] Aliases:map[]}"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.18157045Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-x5rl8 for CNI network kindnet (type=ptp)"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.182188605Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.182213632Z" level=info msg="Starting seccomp notifier watcher"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.182260258Z" level=info msg="Create NRI interface"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.182363327Z" level=info msg="built-in NRI default validator is disabled"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.182373792Z" level=info msg="runtime interface created"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.182385372Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.182391786Z" level=info msg="runtime interface starting up..."
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.182396858Z" level=info msg="starting plugins..."
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.182408576Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.183018144Z" level=info msg="No systemd watchdog enabled"
	Oct 16 18:25:06 pause-388667 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
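
The container status table below is the CRI view of the node and should match what crictl reports from inside it; a sketch (assuming crictl is available in the node image):

	minikube -p pause-388667 ssh -- sudo crictl ps -a
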
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	d238953b63b95       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   0                   3092e0ca9cc3d       coredns-66bc5c9577-x5rl8               kube-system
	1a0355b809c89       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   22 seconds ago      Running             kindnet-cni               0                   23f0463148620       kindnet-bk5tb                          kube-system
	7446dabe8d3d0       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   22 seconds ago      Running             kube-proxy                0                   d1eee11567e88       kube-proxy-bkkgz                       kube-system
	7f40bb35cc90d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   34 seconds ago      Running             kube-controller-manager   0                   06d32a7cead1c       kube-controller-manager-pause-388667   kube-system
	ae63c3e46eefb       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   34 seconds ago      Running             kube-scheduler            0                   13a1c055f18cc       kube-scheduler-pause-388667            kube-system
	956dd8019a90f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   34 seconds ago      Running             kube-apiserver            0                   238653abcf28f       kube-apiserver-pause-388667            kube-system
	735b1613d4b31       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   34 seconds ago      Running             etcd                      0                   885c88c37ab15       etcd-pause-388667                      kube-system
	
	
	==> coredns [d238953b63b95d0adb614cede346e952ca5e2536f0f1729e27d979eebaf9564e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44113 - 41000 "HINFO IN 366144560738812185.3280832218681968935. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.417842768s
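
The single NXDOMAIN line is CoreDNS probing a random name at startup (its HINFO self-check), not a failure. Service lookups can be checked against the kube-dns ClusterIP, 10.96.0.10, allocated in the kube-apiserver log below; a sketch (image tag and pod name are illustrative):

	kubectl --context pause-388667 run dnsprobe --rm -it --restart=Never --image=busybox:1.36 \
	  -- nslookup kubernetes.default.svc.cluster.local 10.96.0.10
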
	
	
	==> describe nodes <==
	Name:               pause-388667
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-388667
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=pause-388667
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_24_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:24:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-388667
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:25:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:25:00 +0000   Thu, 16 Oct 2025 18:24:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:25:00 +0000   Thu, 16 Oct 2025 18:24:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:25:00 +0000   Thu, 16 Oct 2025 18:24:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 18:25:00 +0000   Thu, 16 Oct 2025 18:25:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-388667
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                b2bf5659-f83c-4407-bec1-c119a89bc7b4
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-x5rl8                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-pause-388667                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-bk5tb                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-pause-388667             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-pause-388667    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-bkkgz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-pause-388667             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node pause-388667 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node pause-388667 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x8 over 35s)  kubelet          Node pause-388667 status is now: NodeHasSufficientPID
	  Normal  Starting                 28s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s                kubelet          Node pause-388667 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s                kubelet          Node pause-388667 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s                kubelet          Node pause-388667 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s                node-controller  Node pause-388667 event: Registered Node pause-388667 in Controller
	  Normal  NodeReady                12s                kubelet          Node pause-388667 status is now: NodeReady
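
This section is the standard node description; it can be reproduced against the live profile with:

	kubectl --context pause-388667 describe node pause-388667
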
	
	
	==> dmesg <==
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.008703] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.022984] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023933] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +2.047848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +4.031714] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +8.063410] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[ +16.382910] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[Oct16 17:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
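
The repeating "martian source ... from 127.0.0.1" entries are most likely a side effect of route_localnet=1, which kube-proxy enables for localhost NodePorts (see its log below): loopback-sourced packets become routable and the kernel flags them. The relevant sysctls can be inspected with (interface name taken from the dmesg lines; a diagnostic sketch):

	minikube -p pause-388667 ssh -- sysctl net.ipv4.conf.eth0.route_localnet net.ipv4.conf.all.log_martians
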
	
	
	==> etcd [735b1613d4b31ff3f75bea5e311720a3f9b35a809467bd1b306166b4ab7391ac] <==
	{"level":"warn","ts":"2025-10-16T18:24:40.269017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.276654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.289026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.297153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.304863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.315163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.324643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.334874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.343842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.353879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.363772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.382428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.408187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.429318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.435118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.444667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.459707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.470037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.484980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.502401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.514186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.532054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.542455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.552558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.647602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41434","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:25:12 up  1:07,  0 user,  load average: 5.02, 2.09, 1.27
	Linux pause-388667 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1a0355b809c89c9e7fc817076165843c4af93b1e901010a8b87b8a9d65759c42] <==
	I1016 18:24:50.001896       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 18:24:50.002192       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1016 18:24:50.002327       1 main.go:148] setting mtu 1500 for CNI 
	I1016 18:24:50.002340       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 18:24:50.002361       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T18:24:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 18:24:50.298218       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 18:24:50.298255       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 18:24:50.298267       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 18:24:50.298530       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 18:24:50.698430       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 18:24:50.698462       1 metrics.go:72] Registering metrics
	I1016 18:24:50.698547       1 controller.go:711] "Syncing nftables rules"
	I1016 18:25:00.248794       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 18:25:00.248870       1 main.go:301] handling current node
	I1016 18:25:10.253801       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 18:25:10.253834       1 main.go:301] handling current node
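
The one error here, "nri plugin exited: failed to connect to NRI service", fired at 18:24:50, before the CRI-O restart at 18:25:06 had recreated the NRI socket (the CRI-O config above sets enable_nri = true with nri_listen = "/var/run/nri/nri.sock"); kindnet continued without NRI. To check the socket after the restart (a sketch):

	minikube -p pause-388667 ssh -- ls -l /var/run/nri/nri.sock
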
	
	
	==> kube-apiserver [956dd8019a90f588a5ce2079ef107e07c66ee155b4ffedfe9c3f268c4c18fc2d] <==
	I1016 18:24:41.600673       1 controller.go:667] quota admission added evaluator for: namespaces
	E1016 18:24:41.604366       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1016 18:24:41.607297       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:24:41.607453       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1016 18:24:41.614155       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:24:41.614533       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1016 18:24:41.614727       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1016 18:24:41.807430       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:24:42.400077       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1016 18:24:42.404182       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1016 18:24:42.404199       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 18:24:43.014279       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 18:24:43.063325       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 18:24:43.204884       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1016 18:24:43.211399       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1016 18:24:43.212992       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 18:24:43.218409       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 18:24:43.439861       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 18:24:44.170159       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 18:24:44.184937       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1016 18:24:44.195776       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1016 18:24:49.091257       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:24:49.096158       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:24:49.139948       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 18:24:49.438953       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [7f40bb35cc90d6a6b501433a822a537de59cf6b3e0f0b0e02bd6ed60cc9d345a] <==
	I1016 18:24:48.418199       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1016 18:24:48.436034       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:24:48.436211       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 18:24:48.436255       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 18:24:48.436161       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1016 18:24:48.437492       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 18:24:48.437539       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1016 18:24:48.437896       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1016 18:24:48.438081       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1016 18:24:48.438196       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1016 18:24:48.438255       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1016 18:24:48.438401       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 18:24:48.438439       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 18:24:48.439617       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1016 18:24:48.441591       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1016 18:24:48.441651       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1016 18:24:48.443790       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1016 18:24:48.444963       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:24:48.445164       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:24:48.447225       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 18:24:48.452057       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1016 18:24:48.453021       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1016 18:24:48.460190       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 18:24:48.469772       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:25:03.391159       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7446dabe8d3d0742c1525a6d8174594e5ba5ca08c2637968eb687cb7e6a14e12] <==
	I1016 18:24:49.855068       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:24:49.912522       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:24:50.013143       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:24:50.013175       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1016 18:24:50.013279       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:24:50.033759       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:24:50.033837       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:24:50.039464       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:24:50.039892       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:24:50.039932       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:24:50.041574       1 config.go:200] "Starting service config controller"
	I1016 18:24:50.041603       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:24:50.041697       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:24:50.043092       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:24:50.043278       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:24:50.041757       1 config.go:309] "Starting node config controller"
	I1016 18:24:50.043801       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:24:50.043810       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:24:50.044080       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:24:50.141866       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 18:24:50.143884       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 18:24:50.145172       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
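
The "Kube-proxy configuration may be incomplete" line is advisory, and the message itself names the remedy: set nodePortAddresses to "primary". Under kubeadm that setting lives in the kube-proxy ConfigMap (a sketch; kube-proxy pods must be restarted to pick up the change):

	kubectl --context pause-388667 -n kube-system edit configmap kube-proxy
	# in the config.conf key, set: nodePortAddresses: ["primary"]
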
	
	
	==> kube-scheduler [ae63c3e46eefb8fd7b28cc9c7ab67cacb5b6660e6a4cdaeac8fe16256cc78716] <==
	E1016 18:24:41.529207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:24:41.529357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 18:24:41.529428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 18:24:41.529476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 18:24:41.529529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 18:24:41.533186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 18:24:41.533338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 18:24:41.533589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 18:24:41.533624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 18:24:41.533667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 18:24:41.533699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 18:24:41.533763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 18:24:41.533809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 18:24:41.533859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:24:41.534111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 18:24:41.534176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 18:24:41.534495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 18:24:41.534599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 18:24:42.374475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 18:24:42.401782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1016 18:24:42.592399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 18:24:42.634850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 18:24:42.681093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:24:42.787443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1016 18:24:45.218627       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
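
The "Failed to watch ... forbidden" errors all fall in the first seconds after apiserver start (18:24:41-42) and stop once the RBAC bootstrap policies exist; the caches-synced line at 18:24:45 confirms recovery. Permissions can be spot-checked afterwards (a sketch):

	kubectl --context pause-388667 auth can-i list pods --as=system:kube-scheduler
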
	
	
	==> kubelet <==
	Oct 16 18:24:49 pause-388667 kubelet[1293]: I1016 18:24:49.546848    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt5z5\" (UniqueName: \"kubernetes.io/projected/72d1a38b-8257-40f2-9d37-c3167d464bbf-kube-api-access-mt5z5\") pod \"kindnet-bk5tb\" (UID: \"72d1a38b-8257-40f2-9d37-c3167d464bbf\") " pod="kube-system/kindnet-bk5tb"
	Oct 16 18:24:49 pause-388667 kubelet[1293]: I1016 18:24:49.546869    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/72d1a38b-8257-40f2-9d37-c3167d464bbf-cni-cfg\") pod \"kindnet-bk5tb\" (UID: \"72d1a38b-8257-40f2-9d37-c3167d464bbf\") " pod="kube-system/kindnet-bk5tb"
	Oct 16 18:24:49 pause-388667 kubelet[1293]: I1016 18:24:49.546886    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72d1a38b-8257-40f2-9d37-c3167d464bbf-xtables-lock\") pod \"kindnet-bk5tb\" (UID: \"72d1a38b-8257-40f2-9d37-c3167d464bbf\") " pod="kube-system/kindnet-bk5tb"
	Oct 16 18:24:49 pause-388667 kubelet[1293]: I1016 18:24:49.546918    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72d1a38b-8257-40f2-9d37-c3167d464bbf-lib-modules\") pod \"kindnet-bk5tb\" (UID: \"72d1a38b-8257-40f2-9d37-c3167d464bbf\") " pod="kube-system/kindnet-bk5tb"
	Oct 16 18:24:49 pause-388667 kubelet[1293]: I1016 18:24:49.546942    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/34aa43df-45b5-45b5-96f8-1b1d1dd4bf3d-kube-proxy\") pod \"kube-proxy-bkkgz\" (UID: \"34aa43df-45b5-45b5-96f8-1b1d1dd4bf3d\") " pod="kube-system/kube-proxy-bkkgz"
	Oct 16 18:24:49 pause-388667 kubelet[1293]: I1016 18:24:49.546963    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34aa43df-45b5-45b5-96f8-1b1d1dd4bf3d-xtables-lock\") pod \"kube-proxy-bkkgz\" (UID: \"34aa43df-45b5-45b5-96f8-1b1d1dd4bf3d\") " pod="kube-system/kube-proxy-bkkgz"
	Oct 16 18:24:49 pause-388667 kubelet[1293]: I1016 18:24:49.546989    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twz62\" (UniqueName: \"kubernetes.io/projected/34aa43df-45b5-45b5-96f8-1b1d1dd4bf3d-kube-api-access-twz62\") pod \"kube-proxy-bkkgz\" (UID: \"34aa43df-45b5-45b5-96f8-1b1d1dd4bf3d\") " pod="kube-system/kube-proxy-bkkgz"
	Oct 16 18:24:50 pause-388667 kubelet[1293]: I1016 18:24:50.081023    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-bk5tb" podStartSLOduration=1.081004001 podStartE2EDuration="1.081004001s" podCreationTimestamp="2025-10-16 18:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:24:50.080921131 +0000 UTC m=+6.155771216" watchObservedRunningTime="2025-10-16 18:24:50.081004001 +0000 UTC m=+6.155854086"
	Oct 16 18:24:50 pause-388667 kubelet[1293]: I1016 18:24:50.091859    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bkkgz" podStartSLOduration=1.091836454 podStartE2EDuration="1.091836454s" podCreationTimestamp="2025-10-16 18:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:24:50.091696659 +0000 UTC m=+6.166546744" watchObservedRunningTime="2025-10-16 18:24:50.091836454 +0000 UTC m=+6.166686538"
	Oct 16 18:25:00 pause-388667 kubelet[1293]: I1016 18:25:00.525433    1293 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 16 18:25:00 pause-388667 kubelet[1293]: I1016 18:25:00.633597    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb4l9\" (UniqueName: \"kubernetes.io/projected/b0eca4a0-3dbd-4ffd-bade-48b478c8d7a1-kube-api-access-kb4l9\") pod \"coredns-66bc5c9577-x5rl8\" (UID: \"b0eca4a0-3dbd-4ffd-bade-48b478c8d7a1\") " pod="kube-system/coredns-66bc5c9577-x5rl8"
	Oct 16 18:25:00 pause-388667 kubelet[1293]: I1016 18:25:00.633635    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0eca4a0-3dbd-4ffd-bade-48b478c8d7a1-config-volume\") pod \"coredns-66bc5c9577-x5rl8\" (UID: \"b0eca4a0-3dbd-4ffd-bade-48b478c8d7a1\") " pod="kube-system/coredns-66bc5c9577-x5rl8"
	Oct 16 18:25:01 pause-388667 kubelet[1293]: I1016 18:25:01.119113    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x5rl8" podStartSLOduration=12.119092633 podStartE2EDuration="12.119092633s" podCreationTimestamp="2025-10-16 18:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:25:01.108509419 +0000 UTC m=+17.183359503" watchObservedRunningTime="2025-10-16 18:25:01.119092633 +0000 UTC m=+17.193942718"
	Oct 16 18:25:06 pause-388667 kubelet[1293]: W1016 18:25:06.045012    1293 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 16 18:25:06 pause-388667 kubelet[1293]: E1016 18:25:06.045108    1293 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Oct 16 18:25:06 pause-388667 kubelet[1293]: E1016 18:25:06.045218    1293 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 16 18:25:06 pause-388667 kubelet[1293]: E1016 18:25:06.045241    1293 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 16 18:25:06 pause-388667 kubelet[1293]: E1016 18:25:06.045265    1293 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 16 18:25:06 pause-388667 kubelet[1293]: E1016 18:25:06.107699    1293 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 16 18:25:06 pause-388667 kubelet[1293]: E1016 18:25:06.107785    1293 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 16 18:25:06 pause-388667 kubelet[1293]: E1016 18:25:06.107805    1293 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 16 18:25:09 pause-388667 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 18:25:09 pause-388667 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 18:25:09 pause-388667 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 16 18:25:09 pause-388667 systemd[1]: kubelet.service: Consumed 1.146s CPU time.
	

-- /stdout --
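Note: the repeated "dial unix /var/run/crio/crio.sock: connect: no such file or directory" errors in the kubelet log above are consistent with CRI-O having been stopped while the kubelet was still polling it. A quick way to confirm the runtime's state on the node (a sketch, assuming SSH access to the same profile) is:

	out/minikube-linux-amd64 -p pause-388667 ssh -- sudo systemctl status crio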
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-388667 -n pause-388667
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-388667 -n pause-388667: exit status 2 (383.499381ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
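For reference, the --format argument above is a Go template rendered against minikube's status struct; the Host, Kubelet and APIServer fields can be queried together in one call (a sketch reusing the same profile name):

	out/minikube-linux-amd64 status --format='{{.Host}} {{.Kubelet}} {{.APIServer}}' -p pause-388667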
helpers_test.go:269: (dbg) Run:  kubectl --context pause-388667 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-388667
helpers_test.go:243: (dbg) docker inspect pause-388667:

-- stdout --
	[
	    {
	        "Id": "644aa94f4c501b614c3d1b25e524cb3d3921780c537332bf7036419bf11bb71c",
	        "Created": "2025-10-16T18:24:20.930514628Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182334,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:24:21.528544325Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/644aa94f4c501b614c3d1b25e524cb3d3921780c537332bf7036419bf11bb71c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/644aa94f4c501b614c3d1b25e524cb3d3921780c537332bf7036419bf11bb71c/hostname",
	        "HostsPath": "/var/lib/docker/containers/644aa94f4c501b614c3d1b25e524cb3d3921780c537332bf7036419bf11bb71c/hosts",
	        "LogPath": "/var/lib/docker/containers/644aa94f4c501b614c3d1b25e524cb3d3921780c537332bf7036419bf11bb71c/644aa94f4c501b614c3d1b25e524cb3d3921780c537332bf7036419bf11bb71c-json.log",
	        "Name": "/pause-388667",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-388667:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-388667",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "644aa94f4c501b614c3d1b25e524cb3d3921780c537332bf7036419bf11bb71c",
	                "LowerDir": "/var/lib/docker/overlay2/0cf10175c665507f98aeb5012d17d02d31a94c80eaafa6c7c3a4d2c326007a32-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0cf10175c665507f98aeb5012d17d02d31a94c80eaafa6c7c3a4d2c326007a32/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0cf10175c665507f98aeb5012d17d02d31a94c80eaafa6c7c3a4d2c326007a32/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0cf10175c665507f98aeb5012d17d02d31a94c80eaafa6c7c3a4d2c326007a32/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-388667",
	                "Source": "/var/lib/docker/volumes/pause-388667/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-388667",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-388667",
	                "name.minikube.sigs.k8s.io": "pause-388667",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "783f075fb56be21ca1578f3f61c29d188acfbb86f5a4678f6521cce2c369cfd1",
	            "SandboxKey": "/var/run/docker/netns/783f075fb56b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32975"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-388667": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:b7:eb:7e:85:11",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5cbd667fa9fcdf372acfc559c4f9c9dd391eb2d089d36e1da313b7a5613f9ea3",
	                    "EndpointID": "52ba65efcb23f1e14c6c78240a39465374b595356e9a7e752ee7d609f612d3b2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-388667",
	                        "644aa94f4c50"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
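The field that matters for the pause test in the inspect output above is "State.Paused", still false here; minikube's pause targets the Kubernetes components inside the node rather than docker-pausing the node container itself, so that value staying false is expected. A narrower query for just the relevant fields (a sketch using docker inspect's Go-template support) is:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-388667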
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-388667 -n pause-388667
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-388667 -n pause-388667: exit status 2 (365.439916ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-388667 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-388667 logs -n 25: (1.095571813s)
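When the 25-line tail below is not enough for triage, the full log can be written out for offline inspection (a sketch using minikube's --file flag):

	out/minikube-linux-amd64 -p pause-388667 logs --file=pause-388667.log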
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                        │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-946894 --schedule 5m                                                                            │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │                     │
	│ stop    │ -p scheduled-stop-946894 --schedule 5m                                                                            │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │                     │
	│ stop    │ -p scheduled-stop-946894 --schedule 5m                                                                            │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │                     │
	│ stop    │ -p scheduled-stop-946894 --schedule 15s                                                                           │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │                     │
	│ stop    │ -p scheduled-stop-946894 --schedule 15s                                                                           │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │                     │
	│ stop    │ -p scheduled-stop-946894 --schedule 15s                                                                           │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │                     │
	│ stop    │ -p scheduled-stop-946894 --cancel-scheduled                                                                       │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:22 UTC │ 16 Oct 25 18:22 UTC │
	│ stop    │ -p scheduled-stop-946894 --schedule 15s                                                                           │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │                     │
	│ stop    │ -p scheduled-stop-946894 --schedule 15s                                                                           │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │                     │
	│ stop    │ -p scheduled-stop-946894 --schedule 15s                                                                           │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │ 16 Oct 25 18:23 UTC │
	│ delete  │ -p scheduled-stop-946894                                                                                          │ scheduled-stop-946894       │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │ 16 Oct 25 18:23 UTC │
	│ start   │ -p insufficient-storage-114513 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio  │ insufficient-storage-114513 │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │                     │
	│ delete  │ -p insufficient-storage-114513                                                                                    │ insufficient-storage-114513 │ jenkins │ v1.37.0 │ 16 Oct 25 18:23 UTC │ 16 Oct 25 18:24 UTC │
	│ start   │ -p offline-crio-747718 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ offline-crio-747718         │ jenkins │ v1.37.0 │ 16 Oct 25 18:24 UTC │                     │
	│ start   │ -p pause-388667 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio         │ pause-388667                │ jenkins │ v1.37.0 │ 16 Oct 25 18:24 UTC │ 16 Oct 25 18:25 UTC │
	│ start   │ -p stopped-upgrade-637548 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ stopped-upgrade-637548      │ jenkins │ v1.32.0 │ 16 Oct 25 18:24 UTC │ 16 Oct 25 18:24 UTC │
	│ start   │ -p running-upgrade-931818 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ running-upgrade-931818      │ jenkins │ v1.32.0 │ 16 Oct 25 18:24 UTC │ 16 Oct 25 18:24 UTC │
	│ stop    │ stopped-upgrade-637548 stop                                                                                       │ stopped-upgrade-637548      │ jenkins │ v1.32.0 │ 16 Oct 25 18:24 UTC │ 16 Oct 25 18:24 UTC │
	│ start   │ -p running-upgrade-931818 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio          │ running-upgrade-931818      │ jenkins │ v1.37.0 │ 16 Oct 25 18:24 UTC │ 16 Oct 25 18:25 UTC │
	│ start   │ -p stopped-upgrade-637548 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio          │ stopped-upgrade-637548      │ jenkins │ v1.37.0 │ 16 Oct 25 18:24 UTC │                     │
	│ start   │ -p pause-388667 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                  │ pause-388667                │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │ 16 Oct 25 18:25 UTC │
	│ pause   │ -p pause-388667 --alsologtostderr -v=5                                                                            │ pause-388667                │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │                     │
	│ delete  │ -p running-upgrade-931818                                                                                         │ running-upgrade-931818      │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │ 16 Oct 25 18:25 UTC │
	│ start   │ -p NoKubernetes-200573 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio     │ NoKubernetes-200573         │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │                     │
	│ start   │ -p NoKubernetes-200573 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio             │ NoKubernetes-200573         │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:25:12
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:25:12.248596  196998 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:25:12.248889  196998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:25:12.248898  196998 out.go:374] Setting ErrFile to fd 2...
	I1016 18:25:12.248903  196998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:25:12.249156  196998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:25:12.249613  196998 out.go:368] Setting JSON to false
	I1016 18:25:12.250696  196998 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4060,"bootTime":1760635052,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:25:12.250804  196998 start.go:141] virtualization: kvm guest
	I1016 18:25:12.252412  196998 out.go:179] * [NoKubernetes-200573] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:25:12.253588  196998 notify.go:220] Checking for updates...
	I1016 18:25:12.253613  196998 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:25:12.254806  196998 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:25:12.256183  196998 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:25:12.257385  196998 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 18:25:12.258735  196998 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:25:12.259975  196998 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:25:12.261706  196998 config.go:182] Loaded profile config "offline-crio-747718": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:25:12.261852  196998 config.go:182] Loaded profile config "pause-388667": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:25:12.261924  196998 config.go:182] Loaded profile config "stopped-upgrade-637548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1016 18:25:12.262030  196998 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:25:12.286321  196998 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 18:25:12.286409  196998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:25:12.348272  196998 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-16 18:25:12.336385224 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:25:12.348433  196998 docker.go:318] overlay module found
	I1016 18:25:12.350169  196998 out.go:179] * Using the docker driver based on user configuration
	I1016 18:25:12.351188  196998 start.go:305] selected driver: docker
	I1016 18:25:12.351205  196998 start.go:925] validating driver "docker" against <nil>
	I1016 18:25:12.351221  196998 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:25:12.352007  196998 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:25:12.419829  196998 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-16 18:25:12.407540515 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:25:12.420059  196998 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 18:25:12.420350  196998 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1016 18:25:12.422054  196998 out.go:179] * Using Docker driver with root privileges
	I1016 18:25:12.423410  196998 cni.go:84] Creating CNI manager for ""
	I1016 18:25:12.423497  196998 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:25:12.423512  196998 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1016 18:25:12.423605  196998 start.go:349] cluster config:
	{Name:NoKubernetes-200573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:NoKubernetes-200573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:25:12.425301  196998 out.go:179] * Starting "NoKubernetes-200573" primary control-plane node in "NoKubernetes-200573" cluster
	I1016 18:25:12.426674  196998 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:25:12.428320  196998 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:25:12.429527  196998 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:25:12.429588  196998 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1016 18:25:12.429601  196998 cache.go:58] Caching tarball of preloaded images
	I1016 18:25:12.429627  196998 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:25:12.429795  196998 preload.go:233] Found /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1016 18:25:12.429818  196998 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:25:12.429928  196998 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/NoKubernetes-200573/config.json ...
	I1016 18:25:12.430023  196998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/NoKubernetes-200573/config.json: {Name:mkf569192e520b8f6f4b6a2659f3189260719af6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:25:12.456909  196998 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:25:12.456932  196998 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:25:12.456950  196998 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:25:12.456986  196998 start.go:360] acquireMachinesLock for NoKubernetes-200573: {Name:mkb71ebe5a919ece29c4e7ac244c42f490c9b751 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:25:12.457099  196998 start.go:364] duration metric: took 92.062µs to acquireMachinesLock for "NoKubernetes-200573"
	I1016 18:25:12.457127  196998 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-200573 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:NoKubernetes-200573 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:25:12.457225  196998 start.go:125] createHost starting for "" (driver="docker")
	I1016 18:25:11.562143  193038 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1016 18:25:11.562173  193038 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1016 18:25:11.562198  193038 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:25:11.583761  193038 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1016 18:25:11.583789  193038 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1016 18:25:12.024410  193038 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:25:12.029399  193038 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1016 18:25:12.029427  193038 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1016 18:25:12.523822  193038 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:25:12.528666  193038 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1016 18:25:12.528703  193038 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1016 18:25:13.024384  193038 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:25:13.028751  193038 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1016 18:25:13.037622  193038 api_server.go:141] control plane version: v1.28.3
	I1016 18:25:13.037653  193038 api_server.go:131] duration metric: took 4.014172023s to wait for apiserver health ...
	I1016 18:25:13.037664  193038 cni.go:84] Creating CNI manager for ""
	I1016 18:25:13.037671  193038 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:25:13.040475  193038 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 18:25:13.042157  193038 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 18:25:13.046565  193038 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1016 18:25:13.046586  193038 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 18:25:13.075791  193038 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 18:25:13.854070  193038 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:25:13.857629  193038 system_pods.go:59] 5 kube-system pods found
	I1016 18:25:13.857667  193038 system_pods.go:61] "etcd-stopped-upgrade-637548" [b80ef994-fc1e-430c-9601-a4a40c7f62c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 18:25:13.857680  193038 system_pods.go:61] "kube-apiserver-stopped-upgrade-637548" [bf74618f-6df8-48cc-9cf3-f00cfe76b4bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:25:13.857693  193038 system_pods.go:61] "kube-controller-manager-stopped-upgrade-637548" [f34ca09d-1207-4afb-b274-5a1d1d8ff75b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:25:13.857707  193038 system_pods.go:61] "kube-scheduler-stopped-upgrade-637548" [a19a5bd7-207d-4d8a-a074-8a45338d9226] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:25:13.857729  193038 system_pods.go:61] "storage-provisioner" [646a1bce-fada-417d-9e45-099a55cd8e6f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I1016 18:25:13.857737  193038 system_pods.go:74] duration metric: took 3.648607ms to wait for pod list to return data ...
	I1016 18:25:13.857747  193038 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:25:13.860112  193038 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1016 18:25:13.860135  193038 node_conditions.go:123] node cpu capacity is 8
	I1016 18:25:13.860145  193038 node_conditions.go:105] duration metric: took 2.393966ms to run NodePressure ...
	I1016 18:25:13.860192  193038 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:25:14.046306  193038 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 18:25:14.055813  193038 ops.go:34] apiserver oom_adj: -16
	I1016 18:25:14.055836  193038 kubeadm.go:601] duration metric: took 7.284530729s to restartPrimaryControlPlane
	I1016 18:25:14.055847  193038 kubeadm.go:402] duration metric: took 7.336620868s to StartCluster
	I1016 18:25:14.055867  193038 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:25:14.055943  193038 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:25:14.057372  193038 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:25:14.057624  193038 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:25:14.057745  193038 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:25:14.057837  193038 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-637548"
	I1016 18:25:14.057858  193038 addons.go:238] Setting addon storage-provisioner=true in "stopped-upgrade-637548"
	W1016 18:25:14.057865  193038 addons.go:247] addon storage-provisioner should already be in state true
	I1016 18:25:14.057868  193038 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-637548"
	I1016 18:25:14.057890  193038 config.go:182] Loaded profile config "stopped-upgrade-637548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1016 18:25:14.057892  193038 host.go:66] Checking if "stopped-upgrade-637548" exists ...
	I1016 18:25:14.057897  193038 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-637548"
	I1016 18:25:14.058270  193038 cli_runner.go:164] Run: docker container inspect stopped-upgrade-637548 --format={{.State.Status}}
	I1016 18:25:14.058349  193038 cli_runner.go:164] Run: docker container inspect stopped-upgrade-637548 --format={{.State.Status}}
	I1016 18:25:14.060473  193038 out.go:179] * Verifying Kubernetes components...
	I1016 18:25:14.062168  193038 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:25:14.081869  193038 kapi.go:59] client config for stopped-upgrade-637548: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/client.key", CAFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1016 18:25:14.082210  193038 addons.go:238] Setting addon default-storageclass=true in "stopped-upgrade-637548"
	W1016 18:25:14.082231  193038 addons.go:247] addon default-storageclass should already be in state true
	I1016 18:25:14.082259  193038 host.go:66] Checking if "stopped-upgrade-637548" exists ...
	I1016 18:25:14.082740  193038 cli_runner.go:164] Run: docker container inspect stopped-upgrade-637548 --format={{.State.Status}}
	I1016 18:25:14.083530  193038 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
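
For context, the "client config" dumped at kapi.go:59 above is client-go's rest.Config, printed through its sanitized form (which masks credential bytes). Below is a minimal sketch of how such a config is assembled from the profile's certificate files and turned into a clientset; this is an illustration, not minikube's code, with the Host and file paths copied from the log:

	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		// Host and TLS material match the kapi.go:59 dump above.
		cfg := &rest.Config{
			Host: "https://192.168.94.2:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/stopped-upgrade-637548/client.key",
				CAFile:   "/home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt",
			},
		}
		// NewForConfig validates the config and builds a typed clientset;
		// no request is sent to the apiserver yet.
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println("clientset ready:", clientset != nil)
	}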
	
	
	==> CRI-O <==
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.1164564Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.11732993Z" level=info msg="Conmon does support the --sync option"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.117350747Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.117364721Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.118125357Z" level=info msg="Conmon does support the --sync option"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.118145001Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.122416443Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.122442182Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.123047197Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/c
ni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/
var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.123402418Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.123456476Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.129750491Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.181310718Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-x5rl8 Namespace:kube-system ID:3092e0ca9cc3d1c96b4399fabe7dc26437d37e3d9d4de4263e5c53c6871b5832 UID:b0eca4a0-3dbd-4ffd-bade-48b478c8d7a1 NetNS:/var/run/netns/d9a8e4a8-8ee2-4ddc-b557-969126898f5b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00017e478}] Aliases:map[]}"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.18157045Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-x5rl8 for CNI network kindnet (type=ptp)"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.182188605Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.182213632Z" level=info msg="Starting seccomp notifier watcher"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.182260258Z" level=info msg="Create NRI interface"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.182363327Z" level=info msg="built-in NRI default validator is disabled"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.182373792Z" level=info msg="runtime interface created"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.182385372Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.182391786Z" level=info msg="runtime interface starting up..."
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.182396858Z" level=info msg="starting plugins..."
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.182408576Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 16 18:25:06 pause-388667 crio[2123]: time="2025-10-16T18:25:06.183018144Z" level=info msg="No systemd watchdog enabled"
	Oct 16 18:25:06 pause-388667 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	d238953b63b95       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   3092e0ca9cc3d       coredns-66bc5c9577-x5rl8               kube-system
	1a0355b809c89       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   24 seconds ago      Running             kindnet-cni               0                   23f0463148620       kindnet-bk5tb                          kube-system
	7446dabe8d3d0       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   24 seconds ago      Running             kube-proxy                0                   d1eee11567e88       kube-proxy-bkkgz                       kube-system
	7f40bb35cc90d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   36 seconds ago      Running             kube-controller-manager   0                   06d32a7cead1c       kube-controller-manager-pause-388667   kube-system
	ae63c3e46eefb       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   36 seconds ago      Running             kube-scheduler            0                   13a1c055f18cc       kube-scheduler-pause-388667            kube-system
	956dd8019a90f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   36 seconds ago      Running             kube-apiserver            0                   238653abcf28f       kube-apiserver-pause-388667            kube-system
	735b1613d4b31       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   37 seconds ago      Running             etcd                      0                   885c88c37ab15       etcd-pause-388667                      kube-system
	
	
	==> coredns [d238953b63b95d0adb614cede346e952ca5e2536f0f1729e27d979eebaf9564e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44113 - 41000 "HINFO IN 366144560738812185.3280832218681968935. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.417842768s
	
	
	==> describe nodes <==
	Name:               pause-388667
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-388667
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=pause-388667
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_24_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:24:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-388667
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:25:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:25:00 +0000   Thu, 16 Oct 2025 18:24:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:25:00 +0000   Thu, 16 Oct 2025 18:24:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:25:00 +0000   Thu, 16 Oct 2025 18:24:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 18:25:00 +0000   Thu, 16 Oct 2025 18:25:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-388667
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                b2bf5659-f83c-4407-bec1-c119a89bc7b4
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-x5rl8                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-388667                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-bk5tb                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-388667             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-pause-388667    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-bkkgz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-388667             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node pause-388667 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node pause-388667 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 37s)  kubelet          Node pause-388667 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node pause-388667 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node pause-388667 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node pause-388667 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node pause-388667 event: Registered Node pause-388667 in Controller
	  Normal  NodeReady                14s                kubelet          Node pause-388667 status is now: NodeReady
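
The Allocated resources block above is simply the column-wise sum of the non-terminated pod table, which can be verified by hand:

	cpu requests:    100m + 100m + 100m + 250m + 200m + 0m + 100m = 850m   (850m of 8000m ≈ 10%)
	cpu limits:      100m (kindnet only)                                   (100m of 8000m ≈ 1%)
	memory requests: 70Mi + 100Mi + 50Mi = 220Mi
	memory limits:   170Mi + 50Mi = 220Mi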
	
	
	==> dmesg <==
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.008703] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.022984] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023933] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +2.047848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +4.031714] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +8.063410] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[ +16.382910] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[Oct16 17:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	
	
	==> etcd [735b1613d4b31ff3f75bea5e311720a3f9b35a809467bd1b306166b4ab7391ac] <==
	{"level":"warn","ts":"2025-10-16T18:24:40.269017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.276654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.289026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.297153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.304863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.315163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.324643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.334874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.343842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.353879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.363772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.382428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.408187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.429318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.435118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.444667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.459707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.470037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.484980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.502401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.514186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.532054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.542455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.552558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:24:40.647602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41434","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:25:14 up  1:07,  0 user,  load average: 5.10, 2.16, 1.30
	Linux pause-388667 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1a0355b809c89c9e7fc817076165843c4af93b1e901010a8b87b8a9d65759c42] <==
	I1016 18:24:50.001896       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 18:24:50.002192       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1016 18:24:50.002327       1 main.go:148] setting mtu 1500 for CNI 
	I1016 18:24:50.002340       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 18:24:50.002361       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T18:24:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 18:24:50.298218       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 18:24:50.298255       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 18:24:50.298267       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 18:24:50.298530       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 18:24:50.698430       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 18:24:50.698462       1 metrics.go:72] Registering metrics
	I1016 18:24:50.698547       1 controller.go:711] "Syncing nftables rules"
	I1016 18:25:00.248794       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 18:25:00.248870       1 main.go:301] handling current node
	I1016 18:25:10.253801       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1016 18:25:10.253834       1 main.go:301] handling current node
	
	
	==> kube-apiserver [956dd8019a90f588a5ce2079ef107e07c66ee155b4ffedfe9c3f268c4c18fc2d] <==
	I1016 18:24:41.600673       1 controller.go:667] quota admission added evaluator for: namespaces
	E1016 18:24:41.604366       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1016 18:24:41.607297       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:24:41.607453       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1016 18:24:41.614155       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:24:41.614533       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1016 18:24:41.614727       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1016 18:24:41.807430       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:24:42.400077       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1016 18:24:42.404182       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1016 18:24:42.404199       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 18:24:43.014279       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 18:24:43.063325       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 18:24:43.204884       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1016 18:24:43.211399       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1016 18:24:43.212992       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 18:24:43.218409       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 18:24:43.439861       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 18:24:44.170159       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 18:24:44.184937       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1016 18:24:44.195776       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1016 18:24:49.091257       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:24:49.096158       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:24:49.139948       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 18:24:49.438953       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [7f40bb35cc90d6a6b501433a822a537de59cf6b3e0f0b0e02bd6ed60cc9d345a] <==
	I1016 18:24:48.418199       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1016 18:24:48.436034       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:24:48.436211       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 18:24:48.436255       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 18:24:48.436161       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1016 18:24:48.437492       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 18:24:48.437539       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1016 18:24:48.437896       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1016 18:24:48.438081       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1016 18:24:48.438196       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1016 18:24:48.438255       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1016 18:24:48.438401       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 18:24:48.438439       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 18:24:48.439617       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1016 18:24:48.441591       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1016 18:24:48.441651       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1016 18:24:48.443790       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1016 18:24:48.444963       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:24:48.445164       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:24:48.447225       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 18:24:48.452057       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1016 18:24:48.453021       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1016 18:24:48.460190       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 18:24:48.469772       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:25:03.391159       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7446dabe8d3d0742c1525a6d8174594e5ba5ca08c2637968eb687cb7e6a14e12] <==
	I1016 18:24:49.855068       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:24:49.912522       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:24:50.013143       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:24:50.013175       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1016 18:24:50.013279       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:24:50.033759       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:24:50.033837       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:24:50.039464       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:24:50.039892       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:24:50.039932       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:24:50.041574       1 config.go:200] "Starting service config controller"
	I1016 18:24:50.041603       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:24:50.041697       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:24:50.043092       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:24:50.043278       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:24:50.041757       1 config.go:309] "Starting node config controller"
	I1016 18:24:50.043801       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:24:50.043810       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:24:50.044080       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:24:50.141866       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 18:24:50.143884       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 18:24:50.145172       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
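
The nodePortAddresses warning above is advisory: with the field unset, NodePort connections are accepted on every local IP. Below is a minimal sketch of the configuration field the hint refers to, assuming the k8s.io/kube-proxy config API; this is an illustration only, not a change made by the test:

	package main
	
	import (
		"fmt"
	
		v1alpha1 "k8s.io/kube-proxy/config/v1alpha1"
	)
	
	func main() {
		// "primary" is the special value behind the warning's
		// --nodeport-addresses hint; it restricts NodePort listeners to the
		// node's primary addresses instead of all local IPs.
		cfg := v1alpha1.KubeProxyConfiguration{
			NodePortAddresses: []string{"primary"},
		}
		fmt.Println(cfg.NodePortAddresses)
	}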
	
	
	==> kube-scheduler [ae63c3e46eefb8fd7b28cc9c7ab67cacb5b6660e6a4cdaeac8fe16256cc78716] <==
	E1016 18:24:41.529207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:24:41.529357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 18:24:41.529428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 18:24:41.529476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 18:24:41.529529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 18:24:41.533186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 18:24:41.533338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 18:24:41.533589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 18:24:41.533624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 18:24:41.533667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 18:24:41.533699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 18:24:41.533763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 18:24:41.533809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 18:24:41.533859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:24:41.534111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 18:24:41.534176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 18:24:41.534495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 18:24:41.534599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 18:24:42.374475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 18:24:42.401782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1016 18:24:42.592399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 18:24:42.634850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 18:24:42.681093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:24:42.787443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1016 18:24:45.218627       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 18:24:49 pause-388667 kubelet[1293]: I1016 18:24:49.546848    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt5z5\" (UniqueName: \"kubernetes.io/projected/72d1a38b-8257-40f2-9d37-c3167d464bbf-kube-api-access-mt5z5\") pod \"kindnet-bk5tb\" (UID: \"72d1a38b-8257-40f2-9d37-c3167d464bbf\") " pod="kube-system/kindnet-bk5tb"
	Oct 16 18:24:49 pause-388667 kubelet[1293]: I1016 18:24:49.546869    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/72d1a38b-8257-40f2-9d37-c3167d464bbf-cni-cfg\") pod \"kindnet-bk5tb\" (UID: \"72d1a38b-8257-40f2-9d37-c3167d464bbf\") " pod="kube-system/kindnet-bk5tb"
	Oct 16 18:24:49 pause-388667 kubelet[1293]: I1016 18:24:49.546886    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72d1a38b-8257-40f2-9d37-c3167d464bbf-xtables-lock\") pod \"kindnet-bk5tb\" (UID: \"72d1a38b-8257-40f2-9d37-c3167d464bbf\") " pod="kube-system/kindnet-bk5tb"
	Oct 16 18:24:49 pause-388667 kubelet[1293]: I1016 18:24:49.546918    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72d1a38b-8257-40f2-9d37-c3167d464bbf-lib-modules\") pod \"kindnet-bk5tb\" (UID: \"72d1a38b-8257-40f2-9d37-c3167d464bbf\") " pod="kube-system/kindnet-bk5tb"
	Oct 16 18:24:49 pause-388667 kubelet[1293]: I1016 18:24:49.546942    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/34aa43df-45b5-45b5-96f8-1b1d1dd4bf3d-kube-proxy\") pod \"kube-proxy-bkkgz\" (UID: \"34aa43df-45b5-45b5-96f8-1b1d1dd4bf3d\") " pod="kube-system/kube-proxy-bkkgz"
	Oct 16 18:24:49 pause-388667 kubelet[1293]: I1016 18:24:49.546963    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34aa43df-45b5-45b5-96f8-1b1d1dd4bf3d-xtables-lock\") pod \"kube-proxy-bkkgz\" (UID: \"34aa43df-45b5-45b5-96f8-1b1d1dd4bf3d\") " pod="kube-system/kube-proxy-bkkgz"
	Oct 16 18:24:49 pause-388667 kubelet[1293]: I1016 18:24:49.546989    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twz62\" (UniqueName: \"kubernetes.io/projected/34aa43df-45b5-45b5-96f8-1b1d1dd4bf3d-kube-api-access-twz62\") pod \"kube-proxy-bkkgz\" (UID: \"34aa43df-45b5-45b5-96f8-1b1d1dd4bf3d\") " pod="kube-system/kube-proxy-bkkgz"
	Oct 16 18:24:50 pause-388667 kubelet[1293]: I1016 18:24:50.081023    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-bk5tb" podStartSLOduration=1.081004001 podStartE2EDuration="1.081004001s" podCreationTimestamp="2025-10-16 18:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:24:50.080921131 +0000 UTC m=+6.155771216" watchObservedRunningTime="2025-10-16 18:24:50.081004001 +0000 UTC m=+6.155854086"
	Oct 16 18:24:50 pause-388667 kubelet[1293]: I1016 18:24:50.091859    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bkkgz" podStartSLOduration=1.091836454 podStartE2EDuration="1.091836454s" podCreationTimestamp="2025-10-16 18:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:24:50.091696659 +0000 UTC m=+6.166546744" watchObservedRunningTime="2025-10-16 18:24:50.091836454 +0000 UTC m=+6.166686538"
	Oct 16 18:25:00 pause-388667 kubelet[1293]: I1016 18:25:00.525433    1293 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 16 18:25:00 pause-388667 kubelet[1293]: I1016 18:25:00.633597    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb4l9\" (UniqueName: \"kubernetes.io/projected/b0eca4a0-3dbd-4ffd-bade-48b478c8d7a1-kube-api-access-kb4l9\") pod \"coredns-66bc5c9577-x5rl8\" (UID: \"b0eca4a0-3dbd-4ffd-bade-48b478c8d7a1\") " pod="kube-system/coredns-66bc5c9577-x5rl8"
	Oct 16 18:25:00 pause-388667 kubelet[1293]: I1016 18:25:00.633635    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0eca4a0-3dbd-4ffd-bade-48b478c8d7a1-config-volume\") pod \"coredns-66bc5c9577-x5rl8\" (UID: \"b0eca4a0-3dbd-4ffd-bade-48b478c8d7a1\") " pod="kube-system/coredns-66bc5c9577-x5rl8"
	Oct 16 18:25:01 pause-388667 kubelet[1293]: I1016 18:25:01.119113    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x5rl8" podStartSLOduration=12.119092633 podStartE2EDuration="12.119092633s" podCreationTimestamp="2025-10-16 18:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:25:01.108509419 +0000 UTC m=+17.183359503" watchObservedRunningTime="2025-10-16 18:25:01.119092633 +0000 UTC m=+17.193942718"
	Oct 16 18:25:06 pause-388667 kubelet[1293]: W1016 18:25:06.045012    1293 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 16 18:25:06 pause-388667 kubelet[1293]: E1016 18:25:06.045108    1293 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Oct 16 18:25:06 pause-388667 kubelet[1293]: E1016 18:25:06.045218    1293 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 16 18:25:06 pause-388667 kubelet[1293]: E1016 18:25:06.045241    1293 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 16 18:25:06 pause-388667 kubelet[1293]: E1016 18:25:06.045265    1293 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 16 18:25:06 pause-388667 kubelet[1293]: E1016 18:25:06.107699    1293 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 16 18:25:06 pause-388667 kubelet[1293]: E1016 18:25:06.107785    1293 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 16 18:25:06 pause-388667 kubelet[1293]: E1016 18:25:06.107805    1293 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 16 18:25:09 pause-388667 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 18:25:09 pause-388667 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 18:25:09 pause-388667 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 16 18:25:09 pause-388667 systemd[1]: kubelet.service: Consumed 1.146s CPU time.
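
The kubelet errors at 18:25:06 above all reduce to one condition: the CRI socket file was absent while CRI-O restarted (the CRI-O log earlier shows crio.service coming back up at the same time). Below is a minimal sketch of the same reachability probe; an illustration, not kubelet code:

	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// The endpoint the kubelet dials above; while CRI-O is down and the
		// socket file is gone, this fails with "connect: no such file or directory".
		conn, err := net.DialTimeout("unix", "/var/run/crio/crio.sock", 2*time.Second)
		if err != nil {
			fmt.Println("CRI unavailable:", err)
			return
		}
		conn.Close()
		fmt.Println("CRI socket is accepting connections")
	}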
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-388667 -n pause-388667
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-388667 -n pause-388667: exit status 2 (336.186211ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-388667 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-956814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-956814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (246.569894ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:27:17Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-956814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-956814 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-956814 describe deploy/metrics-server -n kube-system: exit status 1 (64.187909ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-956814 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
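The MK_ADDON_ENABLE_PAUSED exit above comes from a pre-flight check that lists paused containers with `sudo runc list -f json` on the node; here the check itself failed because /run/runc was missing. A rough sketch of such a check under those assumptions (field names follow runc's JSON output; this is not minikube's actual code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer keeps only the fields the check needs from `runc list -f json`.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// The branch this test hit: runc exits nonzero when /run/runc
			// does not exist, so enabling the addon aborts.
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		if ids, err := listPaused(); err != nil {
			fmt.Println("check paused failed:", err)
		} else {
			fmt.Println("paused containers:", ids)
		}
	}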
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-956814
helpers_test.go:243: (dbg) docker inspect old-k8s-version-956814:

-- stdout --
	[
	    {
	        "Id": "2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d",
	        "Created": "2025-10-16T18:26:24.391336039Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 225290,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:26:24.436167069Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d/hostname",
	        "HostsPath": "/var/lib/docker/containers/2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d/hosts",
	        "LogPath": "/var/lib/docker/containers/2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d/2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d-json.log",
	        "Name": "/old-k8s-version-956814",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-956814:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-956814",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d",
	                "LowerDir": "/var/lib/docker/overlay2/6dc9fe3850741937f409c4be942acfc27b5b90ea6a67e2a0b6209b82f9ab1b71-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6dc9fe3850741937f409c4be942acfc27b5b90ea6a67e2a0b6209b82f9ab1b71/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6dc9fe3850741937f409c4be942acfc27b5b90ea6a67e2a0b6209b82f9ab1b71/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6dc9fe3850741937f409c4be942acfc27b5b90ea6a67e2a0b6209b82f9ab1b71/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-956814",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-956814/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-956814",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-956814",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-956814",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "700cc8e96ac6243569d4b7569e9d7ef87f970da8b18bb317784aaf6c5ea056f7",
	            "SandboxKey": "/var/run/docker/netns/700cc8e96ac6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33043"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33044"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33047"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33045"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33046"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-956814": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:ea:8b:ff:54:96",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d1d700daadff6f62e8b6f47bfafd5296def1ddd0bdc304135db2dbcfd26dcae3",
	                    "EndpointID": "8710d8d3dfd29d72da3105d873597f4f8c0f1db8ba5c2b58545d43fef178f2be",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-956814",
	                        "2fe013b2be52"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
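The harness reads individual fields out of JSON like the above with Go templates (see the repeated `docker container inspect ... --format={{.State.Status}}` calls later in this log). A stripped-down sketch of that pattern; the container name is simply the profile from this report:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// inspectField runs `docker container inspect` with a Go template,
	// mirroring the cli_runner.go invocations captured in these logs.
	func inspectField(container, tmpl string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", container, "--format", tmpl).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		status, err := inspectField("old-k8s-version-956814", "{{.State.Status}}")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("container state:", status) // e.g. "running"
	}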
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-956814 -n old-k8s-version-956814
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-956814 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-956814 logs -n 25: (1.217869449s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p force-systemd-flag-607466 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-607466 │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │ 16 Oct 25 18:25 UTC │
	│ start   │ -p cert-expiration-489554 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-489554    │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │ 16 Oct 25 18:25 UTC │
	│ start   │ -p NoKubernetes-200573 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │ 16 Oct 25 18:25 UTC │
	│ delete  │ -p force-systemd-env-275318                                                                                                                                                                                                                   │ force-systemd-env-275318  │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │ 16 Oct 25 18:25 UTC │
	│ start   │ -p cert-options-817096 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-817096       │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │ 16 Oct 25 18:26 UTC │
	│ delete  │ -p NoKubernetes-200573                                                                                                                                                                                                                        │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │ 16 Oct 25 18:25 UTC │
	│ ssh     │ force-systemd-flag-607466 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-607466 │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │ 16 Oct 25 18:25 UTC │
	│ start   │ -p NoKubernetes-200573 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │ 16 Oct 25 18:26 UTC │
	│ delete  │ -p force-systemd-flag-607466                                                                                                                                                                                                                  │ force-systemd-flag-607466 │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p missing-upgrade-294813 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-294813    │ jenkins │ v1.32.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ ssh     │ -p NoKubernetes-200573 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │                     │
	│ stop    │ -p NoKubernetes-200573                                                                                                                                                                                                                        │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p NoKubernetes-200573 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ ssh     │ cert-options-817096 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-817096       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ ssh     │ -p cert-options-817096 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-817096       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ delete  │ -p cert-options-817096                                                                                                                                                                                                                        │ cert-options-817096       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ ssh     │ -p NoKubernetes-200573 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │                     │
	│ delete  │ -p NoKubernetes-200573                                                                                                                                                                                                                        │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-750025 │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p old-k8s-version-956814 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:27 UTC │
	│ start   │ -p missing-upgrade-294813 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-294813    │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:27 UTC │
	│ stop    │ -p kubernetes-upgrade-750025                                                                                                                                                                                                                  │ kubernetes-upgrade-750025 │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-750025 │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │                     │
	│ delete  │ -p missing-upgrade-294813                                                                                                                                                                                                                     │ missing-upgrade-294813    │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-956814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:26:39
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:26:39.061672  228782 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:26:39.062032  228782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:26:39.062046  228782 out.go:374] Setting ErrFile to fd 2...
	I1016 18:26:39.062052  228782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:26:39.062334  228782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:26:39.062969  228782 out.go:368] Setting JSON to false
	I1016 18:26:39.064483  228782 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4147,"bootTime":1760635052,"procs":268,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:26:39.064607  228782 start.go:141] virtualization: kvm guest
	I1016 18:26:39.067810  228782 out.go:179] * [kubernetes-upgrade-750025] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:26:39.069917  228782 notify.go:220] Checking for updates...
	I1016 18:26:39.069958  228782 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:26:39.072224  228782 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:26:39.073930  228782 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:26:39.075293  228782 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 18:26:39.076631  228782 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:26:39.078147  228782 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:26:39.097698  223389 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1016 18:26:39.097820  223389 kubeadm.go:318] [preflight] Running pre-flight checks
	I1016 18:26:39.097941  223389 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1016 18:26:39.098022  223389 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1016 18:26:39.098085  223389 kubeadm.go:318] OS: Linux
	I1016 18:26:39.098156  223389 kubeadm.go:318] CGROUPS_CPU: enabled
	I1016 18:26:39.098224  223389 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1016 18:26:39.098299  223389 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1016 18:26:39.098365  223389 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1016 18:26:39.098433  223389 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1016 18:26:39.098502  223389 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1016 18:26:39.098573  223389 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1016 18:26:39.098627  223389 kubeadm.go:318] CGROUPS_IO: enabled
	I1016 18:26:39.098743  223389 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1016 18:26:39.098882  223389 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1016 18:26:39.099019  223389 kubeadm.go:318] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1016 18:26:39.099116  223389 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1016 18:26:39.103864  223389 out.go:252]   - Generating certificates and keys ...
	I1016 18:26:39.103966  223389 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 18:26:39.104091  223389 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 18:26:39.104188  223389 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 18:26:39.104269  223389 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 18:26:39.104375  223389 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 18:26:39.104457  223389 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 18:26:39.104557  223389 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 18:26:39.104734  223389 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-956814] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1016 18:26:39.104810  223389 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 18:26:39.104995  223389 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-956814] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1016 18:26:39.105070  223389 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 18:26:39.105132  223389 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 18:26:39.105199  223389 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 18:26:39.105283  223389 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 18:26:39.105341  223389 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 18:26:39.105444  223389 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 18:26:39.105539  223389 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 18:26:39.105623  223389 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 18:26:39.105760  223389 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 18:26:39.105849  223389 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 18:26:39.107337  223389 out.go:252]   - Booting up control plane ...
	I1016 18:26:39.107447  223389 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 18:26:39.107543  223389 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 18:26:39.107640  223389 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 18:26:39.107820  223389 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 18:26:39.107901  223389 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 18:26:39.107995  223389 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 18:26:39.108144  223389 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1016 18:26:39.108214  223389 kubeadm.go:318] [apiclient] All control plane components are healthy after 4.502069 seconds
	I1016 18:26:39.108320  223389 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 18:26:39.108419  223389 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 18:26:39.108472  223389 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 18:26:39.108696  223389 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-956814 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 18:26:39.108763  223389 kubeadm.go:318] [bootstrap-token] Using token: f58gze.zbs2km0rvh75uifu
	I1016 18:26:39.110185  223389 out.go:252]   - Configuring RBAC rules ...
	I1016 18:26:39.110298  223389 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 18:26:39.110365  223389 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 18:26:39.110490  223389 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 18:26:39.110593  223389 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 18:26:39.110690  223389 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 18:26:39.110833  223389 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 18:26:39.110985  223389 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 18:26:39.111023  223389 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 18:26:39.111061  223389 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 18:26:39.111067  223389 kubeadm.go:318] 
	I1016 18:26:39.111125  223389 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 18:26:39.111131  223389 kubeadm.go:318] 
	I1016 18:26:39.111190  223389 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 18:26:39.111195  223389 kubeadm.go:318] 
	I1016 18:26:39.111220  223389 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 18:26:39.111267  223389 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 18:26:39.111308  223389 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 18:26:39.111314  223389 kubeadm.go:318] 
	I1016 18:26:39.111361  223389 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 18:26:39.111366  223389 kubeadm.go:318] 
	I1016 18:26:39.111402  223389 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 18:26:39.111408  223389 kubeadm.go:318] 
	I1016 18:26:39.111448  223389 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 18:26:39.111525  223389 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 18:26:39.111631  223389 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 18:26:39.111638  223389 kubeadm.go:318] 
	I1016 18:26:39.111706  223389 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 18:26:39.111808  223389 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 18:26:39.111822  223389 kubeadm.go:318] 
	I1016 18:26:39.111897  223389 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token f58gze.zbs2km0rvh75uifu \
	I1016 18:26:39.112023  223389 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c \
	I1016 18:26:39.112074  223389 kubeadm.go:318] 	--control-plane 
	I1016 18:26:39.112083  223389 kubeadm.go:318] 
	I1016 18:26:39.112188  223389 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 18:26:39.112199  223389 kubeadm.go:318] 
	I1016 18:26:39.112304  223389 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token f58gze.zbs2km0rvh75uifu \
	I1016 18:26:39.112402  223389 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c 
	I1016 18:26:39.112413  223389 cni.go:84] Creating CNI manager for ""
	I1016 18:26:39.112419  223389 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:26:39.114121  223389 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 18:26:39.080312  228782 config.go:182] Loaded profile config "kubernetes-upgrade-750025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1016 18:26:39.080802  228782 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:26:39.109128  228782 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 18:26:39.109215  228782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:26:39.174581  228782 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-16 18:26:39.162506128 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:26:39.174763  228782 docker.go:318] overlay module found
	I1016 18:26:39.177841  228782 out.go:179] * Using the docker driver based on existing profile
	I1016 18:26:39.179425  228782 start.go:305] selected driver: docker
	I1016 18:26:39.179445  228782 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-750025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-750025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:26:39.179526  228782 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:26:39.180143  228782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:26:39.243253  228782 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-16 18:26:39.232703563 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:26:39.243522  228782 cni.go:84] Creating CNI manager for ""
	I1016 18:26:39.243574  228782 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:26:39.243607  228782 start.go:349] cluster config:
	{Name:kubernetes-upgrade-750025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-750025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:26:39.245886  228782 out.go:179] * Starting "kubernetes-upgrade-750025" primary control-plane node in "kubernetes-upgrade-750025" cluster
	I1016 18:26:39.247399  228782 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:26:39.248640  228782 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:26:39.249830  228782 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:26:39.249873  228782 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:26:39.249883  228782 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1016 18:26:39.250025  228782 cache.go:58] Caching tarball of preloaded images
	I1016 18:26:39.250163  228782 preload.go:233] Found /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1016 18:26:39.250192  228782 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:26:39.250345  228782 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kubernetes-upgrade-750025/config.json ...
	I1016 18:26:39.276566  228782 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:26:39.276588  228782 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:26:39.276605  228782 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:26:39.276634  228782 start.go:360] acquireMachinesLock for kubernetes-upgrade-750025: {Name:mk5ddbf045e9e9bdfaccb97ee87ca8adfe2d2710 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:26:39.276711  228782 start.go:364] duration metric: took 44.675µs to acquireMachinesLock for "kubernetes-upgrade-750025"
	I1016 18:26:39.276745  228782 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:26:39.276754  228782 fix.go:54] fixHost starting: 
	I1016 18:26:39.276975  228782 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-750025 --format={{.State.Status}}
	I1016 18:26:39.297498  228782 fix.go:112] recreateIfNeeded on kubernetes-upgrade-750025: state=Stopped err=<nil>
	W1016 18:26:39.297530  228782 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:26:35.955876  228034 delete.go:124] DEMOLISHING missing-upgrade-294813 ...
	I1016 18:26:35.955982  228034 cli_runner.go:164] Run: docker container inspect missing-upgrade-294813 --format={{.State.Status}}
	W1016 18:26:35.977230  228034 cli_runner.go:211] docker container inspect missing-upgrade-294813 --format={{.State.Status}} returned with exit code 1
	W1016 18:26:35.977340  228034 stop.go:83] unable to get state: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	I1016 18:26:35.977370  228034 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	I1016 18:26:35.977922  228034 cli_runner.go:164] Run: docker container inspect missing-upgrade-294813 --format={{.State.Status}}
	W1016 18:26:35.999776  228034 cli_runner.go:211] docker container inspect missing-upgrade-294813 --format={{.State.Status}} returned with exit code 1
	I1016 18:26:35.999857  228034 delete.go:82] Unable to get host status for missing-upgrade-294813, assuming it has already been deleted: state: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	I1016 18:26:35.999923  228034 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-294813
	W1016 18:26:36.020741  228034 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-294813 returned with exit code 1
	I1016 18:26:36.020780  228034 kic.go:371] could not find the container missing-upgrade-294813 to remove it. will try anyways
	I1016 18:26:36.020833  228034 cli_runner.go:164] Run: docker container inspect missing-upgrade-294813 --format={{.State.Status}}
	W1016 18:26:36.041197  228034 cli_runner.go:211] docker container inspect missing-upgrade-294813 --format={{.State.Status}} returned with exit code 1
	W1016 18:26:36.041290  228034 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	I1016 18:26:36.041378  228034 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-294813 /bin/bash -c "sudo init 0"
	W1016 18:26:36.063131  228034 cli_runner.go:211] docker exec --privileged -t missing-upgrade-294813 /bin/bash -c "sudo init 0" returned with exit code 1
	I1016 18:26:36.063164  228034 oci.go:659] error shutdown missing-upgrade-294813: docker exec --privileged -t missing-upgrade-294813 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	I1016 18:26:37.063940  228034 cli_runner.go:164] Run: docker container inspect missing-upgrade-294813 --format={{.State.Status}}
	W1016 18:26:37.084885  228034 cli_runner.go:211] docker container inspect missing-upgrade-294813 --format={{.State.Status}} returned with exit code 1
	I1016 18:26:37.084959  228034 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	I1016 18:26:37.084984  228034 oci.go:673] temporary error: container missing-upgrade-294813 status is  but expect it to be exited
	I1016 18:26:37.085025  228034 retry.go:31] will retry after 495.098927ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	I1016 18:26:37.580332  228034 cli_runner.go:164] Run: docker container inspect missing-upgrade-294813 --format={{.State.Status}}
	W1016 18:26:37.599122  228034 cli_runner.go:211] docker container inspect missing-upgrade-294813 --format={{.State.Status}} returned with exit code 1
	I1016 18:26:37.599185  228034 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	I1016 18:26:37.599198  228034 oci.go:673] temporary error: container missing-upgrade-294813 status is  but expect it to be exited
	I1016 18:26:37.599232  228034 retry.go:31] will retry after 706.028386ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	I1016 18:26:38.305870  228034 cli_runner.go:164] Run: docker container inspect missing-upgrade-294813 --format={{.State.Status}}
	W1016 18:26:38.325514  228034 cli_runner.go:211] docker container inspect missing-upgrade-294813 --format={{.State.Status}} returned with exit code 1
	I1016 18:26:38.325580  228034 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	I1016 18:26:38.325595  228034 oci.go:673] temporary error: container missing-upgrade-294813 status is  but expect it to be exited
	I1016 18:26:38.325653  228034 retry.go:31] will retry after 1.619035282s: couldn't verify container is exited. %v: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	I1016 18:26:39.945883  228034 cli_runner.go:164] Run: docker container inspect missing-upgrade-294813 --format={{.State.Status}}
	W1016 18:26:39.968663  228034 cli_runner.go:211] docker container inspect missing-upgrade-294813 --format={{.State.Status}} returned with exit code 1
	I1016 18:26:39.968765  228034 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	I1016 18:26:39.968776  228034 oci.go:673] temporary error: container missing-upgrade-294813 status is  but expect it to be exited
	I1016 18:26:39.968810  228034 retry.go:31] will retry after 1.645342548s: couldn't verify container is exited. %v: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
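The sequence above is minikube's shutdown-verification loop: docker container inspect fails because the container is already gone, so retry.go sleeps for a growing interval (495ms, 706ms, 1.6s, ...) and tries again. A minimal sketch of that verify-with-backoff pattern, with a hypothetical inspectState helper standing in for the real cli_runner call (not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// inspectState stands in for the cli_runner call in the log: ask Docker
// for the container's state; the command fails once the container is gone.
func inspectState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

// waitExited retries with a growing delay until the container reports
// "exited" or the attempts run out, mirroring the oci.go/retry.go
// exchange above (the logged delays suggest each step is also jittered,
// which is why they are not exact doublings).
func waitExited(name string, attempts int) error {
	delay := 500 * time.Millisecond
	for i := 0; i < attempts; i++ {
		state, err := inspectState(name)
		if err == nil && state == "exited" {
			return nil
		}
		fmt.Printf("will retry after %s: unknown state %q\n", delay, state)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("couldn't verify container %s is exited", name)
}

func main() {
	if err := waitExited("missing-upgrade-294813", 5); err != nil {
		fmt.Println(err)
	}
}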
	I1016 18:26:39.115673  223389 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 18:26:39.120529  223389 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1016 18:26:39.120549  223389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 18:26:39.136747  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 18:26:39.893257  223389 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 18:26:39.893360  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:39.893418  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-956814 minikube.k8s.io/updated_at=2025_10_16T18_26_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=old-k8s-version-956814 minikube.k8s.io/primary=true
	I1016 18:26:39.903935  223389 ops.go:34] apiserver oom_adj: -16
	I1016 18:26:39.986013  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:40.486903  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:40.986554  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:41.486522  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:41.986912  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:39.299709  228782 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-750025" ...
	I1016 18:26:39.299791  228782 cli_runner.go:164] Run: docker start kubernetes-upgrade-750025
	I1016 18:26:39.570879  228782 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-750025 --format={{.State.Status}}
	I1016 18:26:39.593128  228782 kic.go:430] container "kubernetes-upgrade-750025" state is running.
	I1016 18:26:39.593627  228782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-750025
	I1016 18:26:39.615395  228782 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kubernetes-upgrade-750025/config.json ...
	I1016 18:26:39.615629  228782 machine.go:93] provisionDockerMachine start ...
	I1016 18:26:39.615711  228782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-750025
	I1016 18:26:39.636493  228782 main.go:141] libmachine: Using SSH client type: native
	I1016 18:26:39.636846  228782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1016 18:26:39.636872  228782 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:26:39.637661  228782 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48506->127.0.0.1:33048: read: connection reset by peer
	I1016 18:26:42.775706  228782 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-750025
	
	I1016 18:26:42.775748  228782 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-750025"
	I1016 18:26:42.775808  228782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-750025
	I1016 18:26:42.794588  228782 main.go:141] libmachine: Using SSH client type: native
	I1016 18:26:42.795058  228782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1016 18:26:42.795082  228782 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-750025 && echo "kubernetes-upgrade-750025" | sudo tee /etc/hostname
	I1016 18:26:42.941941  228782 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-750025
	
	I1016 18:26:42.942023  228782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-750025
	I1016 18:26:42.960768  228782 main.go:141] libmachine: Using SSH client type: native
	I1016 18:26:42.961060  228782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1016 18:26:42.961095  228782 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-750025' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-750025/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-750025' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:26:43.100867  228782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:26:43.100897  228782 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8849/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8849/.minikube}
	I1016 18:26:43.100966  228782 ubuntu.go:190] setting up certificates
	I1016 18:26:43.100979  228782 provision.go:84] configureAuth start
	I1016 18:26:43.101042  228782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-750025
	I1016 18:26:43.119540  228782 provision.go:143] copyHostCerts
	I1016 18:26:43.119600  228782 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem, removing ...
	I1016 18:26:43.119617  228782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem
	I1016 18:26:43.119694  228782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem (1078 bytes)
	I1016 18:26:43.119840  228782 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem, removing ...
	I1016 18:26:43.119855  228782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem
	I1016 18:26:43.119901  228782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem (1123 bytes)
	I1016 18:26:43.119984  228782 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem, removing ...
	I1016 18:26:43.120000  228782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem
	I1016 18:26:43.120036  228782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem (1679 bytes)
	I1016 18:26:43.120114  228782 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-750025 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-750025 localhost minikube]
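The provision.go:117 step above issues a server certificate whose SANs cover every name and address the machine answers on (127.0.0.1, 192.168.76.2, the hostname, localhost, minikube). A rough standard-library approximation of that step, self-signed for brevity; minikube actually signs with the ca-key.pem listed in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-750025"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:    []string{"kubernetes-upgrade-750025", "localhost", "minikube"},
	}
	// Self-signed here (template == parent); the real step signs with the CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}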
	I1016 18:26:43.352685  228782 provision.go:177] copyRemoteCerts
	I1016 18:26:43.352765  228782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:26:43.352812  228782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-750025
	I1016 18:26:43.371474  228782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/kubernetes-upgrade-750025/id_rsa Username:docker}
	I1016 18:26:43.470656  228782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 18:26:43.489841  228782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1016 18:26:43.508463  228782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:26:43.527327  228782 provision.go:87] duration metric: took 426.333762ms to configureAuth
	I1016 18:26:43.527362  228782 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:26:43.527519  228782 config.go:182] Loaded profile config "kubernetes-upgrade-750025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:26:43.527619  228782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-750025
	I1016 18:26:43.548585  228782 main.go:141] libmachine: Using SSH client type: native
	I1016 18:26:43.548860  228782 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I1016 18:26:43.548890  228782 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:26:43.814540  228782 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:26:43.814564  228782 machine.go:96] duration metric: took 4.198916244s to provisionDockerMachine
	I1016 18:26:43.814576  228782 start.go:293] postStartSetup for "kubernetes-upgrade-750025" (driver="docker")
	I1016 18:26:43.814585  228782 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:26:43.814629  228782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:26:43.814664  228782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-750025
	I1016 18:26:43.834614  228782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/kubernetes-upgrade-750025/id_rsa Username:docker}
	I1016 18:26:43.934631  228782 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:26:43.938923  228782 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:26:43.938958  228782 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:26:43.938971  228782 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/addons for local assets ...
	I1016 18:26:43.939050  228782 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/files for local assets ...
	I1016 18:26:43.939166  228782 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem -> 123752.pem in /etc/ssl/certs
	I1016 18:26:43.939329  228782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:26:43.947816  228782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:26:43.967217  228782 start.go:296] duration metric: took 152.626954ms for postStartSetup
	I1016 18:26:43.967334  228782 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:26:43.967394  228782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-750025
	I1016 18:26:43.987595  228782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/kubernetes-upgrade-750025/id_rsa Username:docker}
	I1016 18:26:44.085241  228782 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:26:44.090675  228782 fix.go:56] duration metric: took 4.813913013s for fixHost
	I1016 18:26:44.090704  228782 start.go:83] releasing machines lock for "kubernetes-upgrade-750025", held for 4.813969499s
	I1016 18:26:44.090791  228782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-750025
	I1016 18:26:44.109700  228782 ssh_runner.go:195] Run: cat /version.json
	I1016 18:26:44.109768  228782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-750025
	I1016 18:26:44.109798  228782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:26:44.110111  228782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-750025
	I1016 18:26:44.129631  228782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/kubernetes-upgrade-750025/id_rsa Username:docker}
	I1016 18:26:44.130370  228782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/kubernetes-upgrade-750025/id_rsa Username:docker}
	I1016 18:26:44.224274  228782 ssh_runner.go:195] Run: systemctl --version
	I1016 18:26:44.280779  228782 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:26:44.316689  228782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:26:44.321857  228782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:26:44.321924  228782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:26:44.330521  228782 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:26:44.330549  228782 start.go:495] detecting cgroup driver to use...
	I1016 18:26:44.330586  228782 detect.go:190] detected "systemd" cgroup driver on host os
	I1016 18:26:44.330681  228782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:26:44.345765  228782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:26:44.359210  228782 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:26:44.359269  228782 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:26:44.374552  228782 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:26:44.389113  228782 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:26:44.474249  228782 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:26:44.561616  228782 docker.go:234] disabling docker service ...
	I1016 18:26:44.561681  228782 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:26:44.576641  228782 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:26:44.589521  228782 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:26:44.674836  228782 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:26:44.758441  228782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:26:44.771312  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:26:44.785586  228782 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:26:44.785648  228782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:26:44.794749  228782 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1016 18:26:44.794811  228782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:26:44.804167  228782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:26:44.813635  228782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:26:44.822708  228782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:26:44.831691  228782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:26:44.840987  228782 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:26:44.849661  228782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:26:44.859289  228782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:26:44.867378  228782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:26:44.875196  228782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:26:44.962399  228782 ssh_runner.go:195] Run: sudo systemctl restart crio
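The run of sed one-liners above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls) before daemon-reload and the crio restart. A hypothetical Go equivalent of the first two edits, not minikube's code (like the logged sudo sh -c calls, it needs root to write the file):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}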
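After edits like these, CRI-O only picks up the new values on the systemctl restart that follows.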
	I1016 18:26:45.072963  228782 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:26:45.073028  228782 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:26:45.077272  228782 start.go:563] Will wait 60s for crictl version
	I1016 18:26:45.077335  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:26:45.081196  228782 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:26:45.106566  228782 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:26:45.106647  228782 ssh_runner.go:195] Run: crio --version
	I1016 18:26:45.135590  228782 ssh_runner.go:195] Run: crio --version
	I1016 18:26:45.166133  228782 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:26:41.614927  228034 cli_runner.go:164] Run: docker container inspect missing-upgrade-294813 --format={{.State.Status}}
	W1016 18:26:41.632509  228034 cli_runner.go:211] docker container inspect missing-upgrade-294813 --format={{.State.Status}} returned with exit code 1
	I1016 18:26:41.632638  228034 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	I1016 18:26:41.632677  228034 oci.go:673] temporary error: container missing-upgrade-294813 status is  but expect it to be exited
	I1016 18:26:41.632759  228034 retry.go:31] will retry after 3.087649412s: couldn't verify container is exited. %v: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	I1016 18:26:44.722860  228034 cli_runner.go:164] Run: docker container inspect missing-upgrade-294813 --format={{.State.Status}}
	W1016 18:26:44.741136  228034 cli_runner.go:211] docker container inspect missing-upgrade-294813 --format={{.State.Status}} returned with exit code 1
	I1016 18:26:44.741208  228034 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	I1016 18:26:44.741229  228034 oci.go:673] temporary error: container missing-upgrade-294813 status is  but expect it to be exited
	I1016 18:26:44.741267  228034 retry.go:31] will retry after 4.811093194s: couldn't verify container is exited. %v: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	I1016 18:26:42.486676  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:42.987003  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:43.486781  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:43.986286  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:44.486811  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:44.986928  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:45.486217  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:45.987026  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:46.486697  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:46.986362  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:45.167740  228782 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-750025 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:26:45.186073  228782 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1016 18:26:45.190452  228782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:26:45.201841  228782 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-750025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-750025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:26:45.201964  228782 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:26:45.202019  228782 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:26:45.237042  228782 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1016 18:26:45.237104  228782 ssh_runner.go:195] Run: which lz4
	I1016 18:26:45.241550  228782 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1016 18:26:45.245710  228782 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1016 18:26:45.245762  228782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1016 18:26:46.145634  228782 crio.go:462] duration metric: took 904.119489ms to copy over tarball
	I1016 18:26:46.145704  228782 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1016 18:26:48.380551  228782 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.234809761s)
	I1016 18:26:48.380580  228782 crio.go:469] duration metric: took 2.234914451s to extract the tarball
	I1016 18:26:48.380589  228782 ssh_runner.go:146] rm: /preloaded.tar.lz4
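For scale: the 409,477,533-byte preload copied in roughly 0.90s (about 450 MB/s over the local Docker connection) and extracted in about 2.23s (about 183 MB/s), which is why restoring a preloaded image tarball beats pulling each image individually.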
	I1016 18:26:48.481228  228782 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:26:48.516352  228782 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:26:48.516375  228782 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:26:48.516387  228782 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1016 18:26:48.516501  228782 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-750025 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-750025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 18:26:48.516571  228782 ssh_runner.go:195] Run: crio config
	I1016 18:26:48.570270  228782 cni.go:84] Creating CNI manager for ""
	I1016 18:26:48.570298  228782 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:26:48.570315  228782 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:26:48.570347  228782 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-750025 NodeName:kubernetes-upgrade-750025 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:26:48.570505  228782 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-750025"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 18:26:48.570569  228782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:26:48.579051  228782 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:26:48.579121  228782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:26:48.587436  228782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1016 18:26:48.600487  228782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:26:48.613494  228782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1016 18:26:48.626564  228782 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1016 18:26:48.630560  228782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:26:48.641483  228782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:26:48.725474  228782 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:26:48.751309  228782 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kubernetes-upgrade-750025 for IP: 192.168.76.2
	I1016 18:26:48.751335  228782 certs.go:195] generating shared ca certs ...
	I1016 18:26:48.751352  228782 certs.go:227] acquiring lock for ca certs: {Name:mkebf15a3970a66f77cf66e14f6efeaafcab5e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:26:48.751501  228782 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key
	I1016 18:26:48.751553  228782 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key
	I1016 18:26:48.751568  228782 certs.go:257] generating profile certs ...
	I1016 18:26:48.751676  228782 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kubernetes-upgrade-750025/client.key
	I1016 18:26:48.751755  228782 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kubernetes-upgrade-750025/apiserver.key.06f5db80
	I1016 18:26:48.751793  228782 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kubernetes-upgrade-750025/proxy-client.key
	I1016 18:26:48.751899  228782 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem (1338 bytes)
	W1016 18:26:48.751925  228782 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375_empty.pem, impossibly tiny 0 bytes
	I1016 18:26:48.751935  228782 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem (1679 bytes)
	I1016 18:26:48.751959  228782 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem (1078 bytes)
	I1016 18:26:48.751980  228782 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:26:48.752078  228782 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem (1679 bytes)
	I1016 18:26:48.752128  228782 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:26:48.752761  228782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:26:48.772626  228782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1016 18:26:48.792052  228782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:26:48.812287  228782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1016 18:26:48.836179  228782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kubernetes-upgrade-750025/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1016 18:26:48.855477  228782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kubernetes-upgrade-750025/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1016 18:26:48.873837  228782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kubernetes-upgrade-750025/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:26:48.892486  228782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kubernetes-upgrade-750025/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:26:48.911230  228782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem --> /usr/share/ca-certificates/12375.pem (1338 bytes)
	I1016 18:26:48.929802  228782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /usr/share/ca-certificates/123752.pem (1708 bytes)
	I1016 18:26:48.947465  228782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:26:48.966249  228782 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:26:48.978969  228782 ssh_runner.go:195] Run: openssl version
	I1016 18:26:48.985217  228782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12375.pem && ln -fs /usr/share/ca-certificates/12375.pem /etc/ssl/certs/12375.pem"
	I1016 18:26:48.995413  228782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12375.pem
	I1016 18:26:49.000541  228782 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:49 /usr/share/ca-certificates/12375.pem
	I1016 18:26:49.000601  228782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12375.pem
	I1016 18:26:49.039069  228782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12375.pem /etc/ssl/certs/51391683.0"
	I1016 18:26:49.048763  228782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123752.pem && ln -fs /usr/share/ca-certificates/123752.pem /etc/ssl/certs/123752.pem"
	I1016 18:26:49.057895  228782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123752.pem
	I1016 18:26:49.555843  228034 cli_runner.go:164] Run: docker container inspect missing-upgrade-294813 --format={{.State.Status}}
	W1016 18:26:49.575179  228034 cli_runner.go:211] docker container inspect missing-upgrade-294813 --format={{.State.Status}} returned with exit code 1
	I1016 18:26:49.575248  228034 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	I1016 18:26:49.575316  228034 oci.go:673] temporary error: container missing-upgrade-294813 status is  but expect it to be exited
	I1016 18:26:49.575394  228034 retry.go:31] will retry after 5.776523801s: couldn't verify container is exited. %v: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	I1016 18:26:47.486104  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:47.986071  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:48.486879  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:48.986927  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:49.486931  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:49.986089  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:50.486620  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:50.986314  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:51.486283  223389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:26:51.575191  223389 kubeadm.go:1113] duration metric: took 11.681872437s to wait for elevateKubeSystemPrivileges
	I1016 18:26:51.575231  223389 kubeadm.go:402] duration metric: took 21.888139295s to StartCluster
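The half-second cadence of the kubectl get sa default calls above is a wait loop: after creating the minikube-rbac binding, keep polling until the default service account exists (the elevateKubeSystemPrivileges step, 11.68s in this run). A hypothetical sketch of the same loop; the paths mirror the log, the helper itself is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` every 500ms, the same
// cadence seen in the log, until the default service account exists or
// the deadline passes.
func waitForDefaultSA(kubectl string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			return nil // the service account is visible; bootstrap can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.28.0/kubectl", 2*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}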
	I1016 18:26:51.575276  223389 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:26:51.575354  223389 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:26:51.577023  223389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:26:51.577302  223389 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:26:51.577325  223389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 18:26:51.577356  223389 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:26:51.577461  223389 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-956814"
	I1016 18:26:51.577487  223389 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-956814"
	I1016 18:26:51.577485  223389 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-956814"
	I1016 18:26:51.577517  223389 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-956814"
	I1016 18:26:51.577519  223389 host.go:66] Checking if "old-k8s-version-956814" exists ...
	I1016 18:26:51.577553  223389 config.go:182] Loaded profile config "old-k8s-version-956814": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1016 18:26:51.577971  223389 cli_runner.go:164] Run: docker container inspect old-k8s-version-956814 --format={{.State.Status}}
	I1016 18:26:51.578085  223389 cli_runner.go:164] Run: docker container inspect old-k8s-version-956814 --format={{.State.Status}}
	I1016 18:26:51.579157  223389 out.go:179] * Verifying Kubernetes components...
	I1016 18:26:51.581126  223389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:26:51.603417  223389 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-956814"
	I1016 18:26:51.603465  223389 host.go:66] Checking if "old-k8s-version-956814" exists ...
	I1016 18:26:51.603952  223389 cli_runner.go:164] Run: docker container inspect old-k8s-version-956814 --format={{.State.Status}}
	I1016 18:26:51.604220  223389 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:26:51.605465  223389 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:26:51.605486  223389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:26:51.605537  223389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-956814
	I1016 18:26:51.632015  223389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/old-k8s-version-956814/id_rsa Username:docker}
	I1016 18:26:51.632519  223389 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:26:51.632543  223389 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:26:51.632581  223389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-956814
	I1016 18:26:51.655981  223389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33043 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/old-k8s-version-956814/id_rsa Username:docker}
	I1016 18:26:51.688221  223389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 18:26:51.726859  223389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:26:51.764094  223389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:26:51.790153  223389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:26:51.937448  223389 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1016 18:26:51.940386  223389 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-956814" to be "Ready" ...
	I1016 18:26:52.243232  223389 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1016 18:26:49.062016  228782 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:49 /usr/share/ca-certificates/123752.pem
	I1016 18:26:49.062134  228782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123752.pem
	I1016 18:26:49.098010  228782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123752.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:26:49.106731  228782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:26:49.115825  228782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:26:49.119914  228782 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:26:49.119978  228782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:26:49.155986  228782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:26:49.164484  228782 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:26:49.168521  228782 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:26:49.204154  228782 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:26:49.239226  228782 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:26:49.277402  228782 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:26:49.315528  228782 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:26:49.351568  228782 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1016 18:26:49.386771  228782 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-750025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-750025 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:26:49.386863  228782 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:26:49.386941  228782 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:26:49.418037  228782 cri.go:89] found id: ""
	I1016 18:26:49.418105  228782 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:26:49.427345  228782 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 18:26:49.427367  228782 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 18:26:49.427420  228782 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 18:26:49.435357  228782 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:26:49.436008  228782 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-750025" does not appear in /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:26:49.436424  228782 kubeconfig.go:62] /home/jenkins/minikube-integration/21738-8849/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-750025" cluster setting kubeconfig missing "kubernetes-upgrade-750025" context setting]
	I1016 18:26:49.437388  228782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:26:49.469404  228782 kapi.go:59] client config for kubernetes-upgrade-750025: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kubernetes-upgrade-750025/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kubernetes-upgrade-750025/client.key", CAFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
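kapi.go:59 builds that *rest.Config by hand from the profile's certificate paths. A hypothetical client-go equivalent that loads the same endpoint and kubeconfig instead (illustrative, not minikube's code):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from the jenkins kubeconfig shown in the log,
	// pinning the API server address from the dump above.
	cfg, err := clientcmd.BuildConfigFromFlags(
		"https://192.168.76.2:8443",
		"/home/jenkins/minikube-integration/21738-8849/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", clientset != nil)
}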
	I1016 18:26:49.469865  228782 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1016 18:26:49.469880  228782 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1016 18:26:49.469885  228782 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1016 18:26:49.469889  228782 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1016 18:26:49.469893  228782 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1016 18:26:49.470222  228782 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 18:26:49.479012  228782 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-16 18:26:27.279656666 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-16 18:26:48.624133840 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-750025"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.34.1
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
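The drift check above works entirely off diff's exit status: 0 means the freshly rendered kubeadm.yaml.new matches what is on disk, 1 means the config drifted (here, the v1beta3 -> v1beta4 migration that turns extraArgs maps into name/value lists and bumps kubernetesVersion), and anything greater signals a diff failure. A sketch of that contract, using the paths from the log:

-- sketch --
package main

import (
	"fmt"
	"os/exec"
)

// configDrifted reports whether two kubeadm configs differ, using the
// diff exit-code convention: 0 = identical, 1 = drifted, >1 = failure.
func configDrifted(current, rendered string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", current, rendered).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: no drift
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ
	}
	return false, "", err // exit >1: diff itself failed
}

func main() {
	drifted, diff, err := configDrifted(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
	fmt.Print(diff)
}
-- /sketch --

When drift is detected, the new file simply replaces the old one (the `sudo cp kubeadm.yaml.new kubeadm.yaml` run below) before the kubeadm init phases are replayed.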
	I1016 18:26:49.479038  228782 kubeadm.go:1160] stopping kube-system containers ...
	I1016 18:26:49.479051  228782 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1016 18:26:49.479096  228782 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:26:49.510324  228782 cri.go:89] found id: ""
	I1016 18:26:49.510385  228782 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1016 18:26:49.545413  228782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 18:26:49.555308  228782 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Oct 16 18:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Oct 16 18:26 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Oct 16 18:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Oct 16 18:26 /etc/kubernetes/scheduler.conf
	
	I1016 18:26:49.555375  228782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1016 18:26:49.564465  228782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1016 18:26:49.573698  228782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1016 18:26:49.582671  228782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:26:49.582741  228782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1016 18:26:49.590958  228782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1016 18:26:49.599316  228782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:26:49.599403  228782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1016 18:26:49.608254  228782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 18:26:49.674948  228782 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:26:49.718262  228782 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:26:51.276484  228782 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.558184729s)
	I1016 18:26:51.276559  228782 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:26:51.460521  228782 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:26:51.522666  228782 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:26:51.576614  228782 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:26:51.576686  228782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:26:52.077807  228782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:26:52.095652  228782 api_server.go:72] duration metric: took 519.042289ms to wait for apiserver process to appear ...
	I1016 18:26:52.095687  228782 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:26:52.095710  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:26:52.096138  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:26:52.595832  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
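The healthz wait above is a plain poll: GET https://<node-ip>:8443/healthz, treat "connection refused" and client timeouts as "not up yet", and retry on an interval until a deadline. A minimal sketch of that loop; skipping TLS verification is an illustration-only shortcut, since minikube configures a proper CA for this endpoint:

-- sketch --
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// 200 or the deadline passes, mirroring the retry loop in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: do not skip verification in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "healthz returned 200: ok" case
			}
		}
		// connection refused / timeout: apiserver not up yet, retry
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute))
}
-- /sketch --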
	I1016 18:26:55.356766  228034 cli_runner.go:164] Run: docker container inspect missing-upgrade-294813 --format={{.State.Status}}
	W1016 18:26:55.375232  228034 cli_runner.go:211] docker container inspect missing-upgrade-294813 --format={{.State.Status}} returned with exit code 1
	I1016 18:26:55.375303  228034 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	I1016 18:26:55.375317  228034 oci.go:673] temporary error: container missing-upgrade-294813 status is  but expect it to be exited
	I1016 18:26:55.375363  228034 oci.go:88] couldn't shut down missing-upgrade-294813 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-294813": docker container inspect missing-upgrade-294813 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-294813
	 
	I1016 18:26:55.375414  228034 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-294813
	I1016 18:26:55.392805  228034 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-294813
	W1016 18:26:55.410642  228034 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-294813 returned with exit code 1
	I1016 18:26:55.410803  228034 cli_runner.go:164] Run: docker network inspect missing-upgrade-294813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:26:55.428781  228034 cli_runner.go:164] Run: docker network rm missing-upgrade-294813
	I1016 18:26:55.591448  228034 fix.go:124] Sleeping 1 second for extra luck!
	I1016 18:26:56.592507  228034 start.go:125] createHost starting for "" (driver="docker")
	I1016 18:26:52.244874  223389 addons.go:514] duration metric: took 667.517886ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1016 18:26:52.442162  223389 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-956814" context rescaled to 1 replicas
	W1016 18:26:53.945004  223389 node_ready.go:57] node "old-k8s-version-956814" has "Ready":"False" status (will retry)
	W1016 18:26:56.444545  223389 node_ready.go:57] node "old-k8s-version-956814" has "Ready":"False" status (will retry)
	I1016 18:26:57.598816  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1016 18:26:57.598855  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:26:56.594260  228034 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1016 18:26:56.594386  228034 start.go:159] libmachine.API.Create for "missing-upgrade-294813" (driver="docker")
	I1016 18:26:56.594416  228034 client.go:168] LocalClient.Create starting
	I1016 18:26:56.594499  228034 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem
	I1016 18:26:56.594530  228034 main.go:141] libmachine: Decoding PEM data...
	I1016 18:26:56.594544  228034 main.go:141] libmachine: Parsing certificate...
	I1016 18:26:56.594602  228034 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem
	I1016 18:26:56.594621  228034 main.go:141] libmachine: Decoding PEM data...
	I1016 18:26:56.594631  228034 main.go:141] libmachine: Parsing certificate...
	I1016 18:26:56.594882  228034 cli_runner.go:164] Run: docker network inspect missing-upgrade-294813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1016 18:26:56.612478  228034 cli_runner.go:211] docker network inspect missing-upgrade-294813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1016 18:26:56.612563  228034 network_create.go:284] running [docker network inspect missing-upgrade-294813] to gather additional debugging logs...
	I1016 18:26:56.612589  228034 cli_runner.go:164] Run: docker network inspect missing-upgrade-294813
	W1016 18:26:56.628066  228034 cli_runner.go:211] docker network inspect missing-upgrade-294813 returned with exit code 1
	I1016 18:26:56.628102  228034 network_create.go:287] error running [docker network inspect missing-upgrade-294813]: docker network inspect missing-upgrade-294813: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-294813 not found
	I1016 18:26:56.628118  228034 network_create.go:289] output of [docker network inspect missing-upgrade-294813]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-294813 not found
	
	** /stderr **
	I1016 18:26:56.628210  228034 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:26:56.646015  228034 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e6b487beca69 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:46:43:25:0f:93} reservation:<nil>}
	I1016 18:26:56.646701  228034 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9d79ecee39e1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:a0:12:f5:af:3a} reservation:<nil>}
	I1016 18:26:56.647370  228034 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-23b5ade12eda IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:13:e4:8d:c1:04} reservation:<nil>}
	I1016 18:26:56.647883  228034 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a07ac2eb0982 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:42:2a:d5:21:5c:9c} reservation:<nil>}
	I1016 18:26:56.648557  228034 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-33335e6a6c4d IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:7a:04:2f:a5:66:a2} reservation:<nil>}
	I1016 18:26:56.649370  228034 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002022100}
	I1016 18:26:56.649402  228034 network_create.go:124] attempt to create docker network missing-upgrade-294813 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1016 18:26:56.649454  228034 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-294813 missing-upgrade-294813
	I1016 18:26:56.711575  228034 network_create.go:108] docker network missing-upgrade-294813 192.168.94.0/24 created
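The subnet scan above is deterministic: in this run the candidates start at 192.168.49.0/24 and step the third octet by 9 (49, 58, 67, 76, 85, 94, ...), skipping any range already claimed by an existing bridge, and the first free candidate becomes the new network. A pure-logic sketch of that walk, with the taken set hard-coded to match the log:

-- sketch --
package main

import "fmt"

// firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... (third
// octet += 9) and returns the first candidate not in the taken set.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 244; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return "" // exhausted the private range
}

func main() {
	taken := map[string]bool{ // bridges that existed when this log ran
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.94.0/24
}
-- /sketch --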
	I1016 18:26:56.711606  228034 kic.go:121] calculated static IP "192.168.94.2" for the "missing-upgrade-294813" container
	I1016 18:26:56.711672  228034 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1016 18:26:56.730245  228034 cli_runner.go:164] Run: docker volume create missing-upgrade-294813 --label name.minikube.sigs.k8s.io=missing-upgrade-294813 --label created_by.minikube.sigs.k8s.io=true
	I1016 18:26:56.747570  228034 oci.go:103] Successfully created a docker volume missing-upgrade-294813
	I1016 18:26:56.747662  228034 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-294813-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-294813 --entrypoint /usr/bin/test -v missing-upgrade-294813:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1016 18:26:57.264467  228034 oci.go:107] Successfully prepared a docker volume missing-upgrade-294813
	I1016 18:26:57.264508  228034 preload.go:183] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1016 18:26:57.264526  228034 kic.go:194] Starting extracting preloaded images to volume ...
	I1016 18:26:57.264602  228034 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-294813:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	W1016 18:26:58.943551  223389 node_ready.go:57] node "old-k8s-version-956814" has "Ready":"False" status (will retry)
	W1016 18:27:01.443889  223389 node_ready.go:57] node "old-k8s-version-956814" has "Ready":"False" status (will retry)
	I1016 18:27:02.600838  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1016 18:27:02.600887  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:02.500699  228034 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-294813:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.236041372s)
	I1016 18:27:02.500753  228034 kic.go:203] duration metric: took 5.236222119s to extract preloaded images to volume ...
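Worth noting how the preload lands on the node: the images are not pulled over the network but unpacked from a host-side .tar.lz4 into the node's /var volume by a throwaway kicbase container whose entrypoint is tar (the sidecar run that just completed above, in ~5.2s). A sketch of that docker invocation from Go; the tarball path in main is hypothetical:

-- sketch --
package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks a preloaded-images tarball into a docker volume
// by mounting both into a short-lived container that just runs tar.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro", // host tarball, read-only
		"-v", volume+":/extractDir",        // the node's /var volume
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(extractPreload(
		"/tmp/preloaded-images.tar.lz4", // hypothetical path
		"missing-upgrade-294813",
		"gcr.io/k8s-minikube/kicbase:v0.0.42"))
}
-- /sketch --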
	W1016 18:27:02.500847  228034 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1016 18:27:02.500877  228034 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1016 18:27:02.500919  228034 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1016 18:27:02.562556  228034 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-294813 --name missing-upgrade-294813 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-294813 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-294813 --network missing-upgrade-294813 --ip 192.168.94.2 --volume missing-upgrade-294813:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1016 18:27:02.858039  228034 cli_runner.go:164] Run: docker container inspect missing-upgrade-294813 --format={{.State.Running}}
	I1016 18:27:02.877864  228034 cli_runner.go:164] Run: docker container inspect missing-upgrade-294813 --format={{.State.Status}}
	I1016 18:27:02.898576  228034 cli_runner.go:164] Run: docker exec missing-upgrade-294813 stat /var/lib/dpkg/alternatives/iptables
	I1016 18:27:02.953766  228034 oci.go:144] the created container "missing-upgrade-294813" has a running status.
	I1016 18:27:02.953796  228034 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/missing-upgrade-294813/id_rsa...
	I1016 18:27:03.100175  228034 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21738-8849/.minikube/machines/missing-upgrade-294813/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1016 18:27:03.127502  228034 cli_runner.go:164] Run: docker container inspect missing-upgrade-294813 --format={{.State.Status}}
	I1016 18:27:03.152679  228034 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1016 18:27:03.152703  228034 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-294813 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1016 18:27:03.208621  228034 cli_runner.go:164] Run: docker container inspect missing-upgrade-294813 --format={{.State.Status}}
	I1016 18:27:03.233041  228034 machine.go:93] provisionDockerMachine start ...
	I1016 18:27:03.233159  228034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-294813
	I1016 18:27:03.254956  228034 main.go:141] libmachine: Using SSH client type: native
	I1016 18:27:03.255310  228034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1016 18:27:03.255336  228034 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:27:03.378512  228034 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-294813
	
	I1016 18:27:03.378541  228034 ubuntu.go:182] provisioning hostname "missing-upgrade-294813"
	I1016 18:27:03.378600  228034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-294813
	I1016 18:27:03.398685  228034 main.go:141] libmachine: Using SSH client type: native
	I1016 18:27:03.398922  228034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1016 18:27:03.398938  228034 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-294813 && echo "missing-upgrade-294813" | sudo tee /etc/hostname
	I1016 18:27:03.533690  228034 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-294813
	
	I1016 18:27:03.533783  228034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-294813
	I1016 18:27:03.554672  228034 main.go:141] libmachine: Using SSH client type: native
	I1016 18:27:03.554925  228034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1016 18:27:03.554946  228034 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-294813' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-294813/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-294813' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:27:03.673848  228034 main.go:141] libmachine: SSH cmd err, output: <nil>: 
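Everything from the hostname probe through the /etc/hosts rewrite above rides over one mechanism: an SSH session into the container on its published 22/tcp port (127.0.0.1:33053 here), authenticated with the freshly generated id_rsa key. A minimal client sketch, assuming golang.org/x/crypto/ssh; host-key checking is skipped purely for illustration:

-- sketch --
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH executes one command on the kic node, the way the
// "About to run SSH command" lines above are driven.
func runOverSSH(addr, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
	})
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:33053",
		os.Getenv("HOME")+"/.minikube/machines/missing-upgrade-294813/id_rsa",
		"hostname")
	fmt.Println(out, err)
}
-- /sketch --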
	I1016 18:27:03.673930  228034 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8849/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8849/.minikube}
	I1016 18:27:03.673969  228034 ubuntu.go:190] setting up certificates
	I1016 18:27:03.673986  228034 provision.go:84] configureAuth start
	I1016 18:27:03.674055  228034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-294813
	I1016 18:27:03.693580  228034 provision.go:143] copyHostCerts
	I1016 18:27:03.693641  228034 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem, removing ...
	I1016 18:27:03.693653  228034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem
	I1016 18:27:03.693737  228034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem (1078 bytes)
	I1016 18:27:03.693836  228034 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem, removing ...
	I1016 18:27:03.693845  228034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem
	I1016 18:27:03.693877  228034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem (1123 bytes)
	I1016 18:27:03.693933  228034 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem, removing ...
	I1016 18:27:03.693940  228034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem
	I1016 18:27:03.693963  228034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem (1679 bytes)
	I1016 18:27:03.694017  228034 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-294813 san=[127.0.0.1 192.168.94.2 localhost minikube missing-upgrade-294813]
	I1016 18:27:03.852366  228034 provision.go:177] copyRemoteCerts
	I1016 18:27:03.852436  228034 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:27:03.852484  228034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-294813
	I1016 18:27:03.871789  228034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/missing-upgrade-294813/id_rsa Username:docker}
	I1016 18:27:03.959819  228034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1016 18:27:03.988019  228034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:27:04.013907  228034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 18:27:04.043281  228034 provision.go:87] duration metric: took 369.276232ms to configureAuth
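configureAuth above boils down to minting a server certificate whose SAN list covers every name the machine answers to (127.0.0.1, the node IP 192.168.94.2, localhost, minikube, and the profile name), signed with the profile's ca-key.pem. A compressed sketch of the x509 side; it self-signs for brevity where minikube signs with its CA key:

-- sketch --
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// serverCertPEM builds a server certificate valid for the given SANs.
func serverCertPEM(dnsNames []string, ips []net.IP) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.missing-upgrade-294813"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames, // localhost, minikube, missing-upgrade-294813
		IPAddresses:  ips,      // 127.0.0.1, 192.168.94.2
	}
	// Self-signed for the sketch; the real flow passes the CA cert and
	// CA private key as the parent/signer arguments instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	pemBytes, err := serverCertPEM(
		[]string{"localhost", "minikube", "missing-upgrade-294813"},
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")})
	fmt.Println(len(pemBytes), err)
}
-- /sketch --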
	I1016 18:27:04.043315  228034 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:27:04.043489  228034 config.go:182] Loaded profile config "missing-upgrade-294813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1016 18:27:04.043578  228034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-294813
	I1016 18:27:04.062316  228034 main.go:141] libmachine: Using SSH client type: native
	I1016 18:27:04.062580  228034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1016 18:27:04.062606  228034 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:27:04.318430  228034 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:27:04.318457  228034 machine.go:96] duration metric: took 1.085391689s to provisionDockerMachine
	I1016 18:27:04.318471  228034 client.go:171] duration metric: took 7.724049857s to LocalClient.Create
	I1016 18:27:04.318492  228034 start.go:167] duration metric: took 7.724105332s to libmachine.API.Create "missing-upgrade-294813"
	I1016 18:27:04.318502  228034 start.go:293] postStartSetup for "missing-upgrade-294813" (driver="docker")
	I1016 18:27:04.318524  228034 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:27:04.318586  228034 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:27:04.318633  228034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-294813
	I1016 18:27:04.337461  228034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/missing-upgrade-294813/id_rsa Username:docker}
	I1016 18:27:04.426484  228034 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:27:04.430469  228034 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:27:04.430509  228034 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1016 18:27:04.430525  228034 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1016 18:27:04.430534  228034 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1016 18:27:04.430547  228034 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/addons for local assets ...
	I1016 18:27:04.430681  228034 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/files for local assets ...
	I1016 18:27:04.430826  228034 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem -> 123752.pem in /etc/ssl/certs
	I1016 18:27:04.431031  228034 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:27:04.440987  228034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:27:04.470285  228034 start.go:296] duration metric: took 151.76728ms for postStartSetup
	I1016 18:27:04.470673  228034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-294813
	I1016 18:27:04.490132  228034 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/config.json ...
	I1016 18:27:04.490387  228034 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:27:04.490432  228034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-294813
	I1016 18:27:04.509122  228034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/missing-upgrade-294813/id_rsa Username:docker}
	I1016 18:27:04.591903  228034 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:27:04.596483  228034 start.go:128] duration metric: took 8.003936342s to createHost
	I1016 18:27:04.596565  228034 cli_runner.go:164] Run: docker container inspect missing-upgrade-294813 --format={{.State.Status}}
	W1016 18:27:04.615538  228034 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:27:04.615565  228034 machine.go:93] provisionDockerMachine start ...
	I1016 18:27:04.615637  228034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-294813
	I1016 18:27:04.633678  228034 main.go:141] libmachine: Using SSH client type: native
	I1016 18:27:04.633946  228034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1016 18:27:04.633963  228034 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:27:04.750618  228034 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-294813
	
	I1016 18:27:04.750643  228034 ubuntu.go:182] provisioning hostname "missing-upgrade-294813"
	I1016 18:27:04.750691  228034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-294813
	I1016 18:27:04.769326  228034 main.go:141] libmachine: Using SSH client type: native
	I1016 18:27:04.769524  228034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1016 18:27:04.769537  228034 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-294813 && echo "missing-upgrade-294813" | sudo tee /etc/hostname
	I1016 18:27:04.898513  228034 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-294813
	
	I1016 18:27:04.898624  228034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-294813
	I1016 18:27:04.919368  228034 main.go:141] libmachine: Using SSH client type: native
	I1016 18:27:04.919577  228034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1016 18:27:04.919594  228034 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-294813' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-294813/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-294813' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:27:05.036607  228034 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:27:05.036644  228034 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8849/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8849/.minikube}
	I1016 18:27:05.036674  228034 ubuntu.go:190] setting up certificates
	I1016 18:27:05.036695  228034 provision.go:84] configureAuth start
	I1016 18:27:05.036793  228034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-294813
	I1016 18:27:05.056537  228034 provision.go:143] copyHostCerts
	I1016 18:27:05.056590  228034 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem, removing ...
	I1016 18:27:05.056598  228034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem
	I1016 18:27:05.056656  228034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem (1123 bytes)
	I1016 18:27:05.056768  228034 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem, removing ...
	I1016 18:27:05.056780  228034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem
	I1016 18:27:05.056805  228034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem (1679 bytes)
	I1016 18:27:05.056866  228034 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem, removing ...
	I1016 18:27:05.056873  228034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem
	I1016 18:27:05.056895  228034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem (1078 bytes)
	I1016 18:27:05.056947  228034 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-294813 san=[127.0.0.1 192.168.94.2 localhost minikube missing-upgrade-294813]
	I1016 18:27:05.188996  228034 provision.go:177] copyRemoteCerts
	I1016 18:27:05.189055  228034 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:27:05.189089  228034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-294813
	I1016 18:27:05.208045  228034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/missing-upgrade-294813/id_rsa Username:docker}
	I1016 18:27:05.298909  228034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 18:27:05.326365  228034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1016 18:27:05.353309  228034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:27:05.379122  228034 provision.go:87] duration metric: took 342.412232ms to configureAuth
	I1016 18:27:05.379154  228034 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:27:05.379304  228034 config.go:182] Loaded profile config "missing-upgrade-294813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1016 18:27:05.379404  228034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-294813
	I1016 18:27:05.398156  228034 main.go:141] libmachine: Using SSH client type: native
	I1016 18:27:05.398374  228034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1016 18:27:05.398391  228034 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1016 18:27:03.943571  223389 node_ready.go:57] node "old-k8s-version-956814" has "Ready":"False" status (will retry)
	I1016 18:27:05.443778  223389 node_ready.go:49] node "old-k8s-version-956814" is "Ready"
	I1016 18:27:05.443807  223389 node_ready.go:38] duration metric: took 13.503382506s for node "old-k8s-version-956814" to be "Ready" ...
	I1016 18:27:05.443822  223389 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:27:05.443879  223389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:27:05.457021  223389 api_server.go:72] duration metric: took 13.879680878s to wait for apiserver process to appear ...
	I1016 18:27:05.457050  223389 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:27:05.457068  223389 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1016 18:27:05.461487  223389 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1016 18:27:05.462688  223389 api_server.go:141] control plane version: v1.28.0
	I1016 18:27:05.462710  223389 api_server.go:131] duration metric: took 5.654033ms to wait for apiserver health ...
	I1016 18:27:05.462734  223389 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:27:05.466608  223389 system_pods.go:59] 8 kube-system pods found
	I1016 18:27:05.466640  223389 system_pods.go:61] "coredns-5dd5756b68-kdcm7" [843a7578-3aeb-49b4-afcf-aa7d0c26f7f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:27:05.466645  223389 system_pods.go:61] "etcd-old-k8s-version-956814" [df912ec6-1f46-496f-8651-3d9e192ac464] Running
	I1016 18:27:05.466652  223389 system_pods.go:61] "kindnet-94l8q" [f914e471-760c-4cc6-ad8e-b3c0372d9f38] Running
	I1016 18:27:05.466655  223389 system_pods.go:61] "kube-apiserver-old-k8s-version-956814" [996ba7eb-ac5d-4500-9246-998ab92fdde9] Running
	I1016 18:27:05.466660  223389 system_pods.go:61] "kube-controller-manager-old-k8s-version-956814" [2acfa7fd-18c6-49eb-ab39-997dae3776da] Running
	I1016 18:27:05.466663  223389 system_pods.go:61] "kube-proxy-nkwcm" [42a87fa5-c9a9-4549-82ae-7026313269a8] Running
	I1016 18:27:05.466666  223389 system_pods.go:61] "kube-scheduler-old-k8s-version-956814" [68968c7d-d4ba-4b40-a014-805d7d5acdbc] Running
	I1016 18:27:05.466671  223389 system_pods.go:61] "storage-provisioner" [58886065-9960-40b4-964e-f767d2460754] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:27:05.466676  223389 system_pods.go:74] duration metric: took 3.93656ms to wait for pod list to return data ...
	I1016 18:27:05.466684  223389 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:27:05.468875  223389 default_sa.go:45] found service account: "default"
	I1016 18:27:05.468900  223389 default_sa.go:55] duration metric: took 2.210185ms for default service account to be created ...
	I1016 18:27:05.468911  223389 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 18:27:05.472427  223389 system_pods.go:86] 8 kube-system pods found
	I1016 18:27:05.472460  223389 system_pods.go:89] "coredns-5dd5756b68-kdcm7" [843a7578-3aeb-49b4-afcf-aa7d0c26f7f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:27:05.472468  223389 system_pods.go:89] "etcd-old-k8s-version-956814" [df912ec6-1f46-496f-8651-3d9e192ac464] Running
	I1016 18:27:05.472473  223389 system_pods.go:89] "kindnet-94l8q" [f914e471-760c-4cc6-ad8e-b3c0372d9f38] Running
	I1016 18:27:05.472481  223389 system_pods.go:89] "kube-apiserver-old-k8s-version-956814" [996ba7eb-ac5d-4500-9246-998ab92fdde9] Running
	I1016 18:27:05.472485  223389 system_pods.go:89] "kube-controller-manager-old-k8s-version-956814" [2acfa7fd-18c6-49eb-ab39-997dae3776da] Running
	I1016 18:27:05.472488  223389 system_pods.go:89] "kube-proxy-nkwcm" [42a87fa5-c9a9-4549-82ae-7026313269a8] Running
	I1016 18:27:05.472491  223389 system_pods.go:89] "kube-scheduler-old-k8s-version-956814" [68968c7d-d4ba-4b40-a014-805d7d5acdbc] Running
	I1016 18:27:05.472496  223389 system_pods.go:89] "storage-provisioner" [58886065-9960-40b4-964e-f767d2460754] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:27:05.472515  223389 retry.go:31] will retry after 197.08121ms: missing components: kube-dns
	I1016 18:27:05.674508  223389 system_pods.go:86] 8 kube-system pods found
	I1016 18:27:05.674548  223389 system_pods.go:89] "coredns-5dd5756b68-kdcm7" [843a7578-3aeb-49b4-afcf-aa7d0c26f7f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:27:05.674557  223389 system_pods.go:89] "etcd-old-k8s-version-956814" [df912ec6-1f46-496f-8651-3d9e192ac464] Running
	I1016 18:27:05.674568  223389 system_pods.go:89] "kindnet-94l8q" [f914e471-760c-4cc6-ad8e-b3c0372d9f38] Running
	I1016 18:27:05.674574  223389 system_pods.go:89] "kube-apiserver-old-k8s-version-956814" [996ba7eb-ac5d-4500-9246-998ab92fdde9] Running
	I1016 18:27:05.674581  223389 system_pods.go:89] "kube-controller-manager-old-k8s-version-956814" [2acfa7fd-18c6-49eb-ab39-997dae3776da] Running
	I1016 18:27:05.674586  223389 system_pods.go:89] "kube-proxy-nkwcm" [42a87fa5-c9a9-4549-82ae-7026313269a8] Running
	I1016 18:27:05.674591  223389 system_pods.go:89] "kube-scheduler-old-k8s-version-956814" [68968c7d-d4ba-4b40-a014-805d7d5acdbc] Running
	I1016 18:27:05.674597  223389 system_pods.go:89] "storage-provisioner" [58886065-9960-40b4-964e-f767d2460754] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:27:05.674612  223389 retry.go:31] will retry after 363.347648ms: missing components: kube-dns
	I1016 18:27:06.044058  223389 system_pods.go:86] 8 kube-system pods found
	I1016 18:27:06.044097  223389 system_pods.go:89] "coredns-5dd5756b68-kdcm7" [843a7578-3aeb-49b4-afcf-aa7d0c26f7f2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:27:06.044105  223389 system_pods.go:89] "etcd-old-k8s-version-956814" [df912ec6-1f46-496f-8651-3d9e192ac464] Running
	I1016 18:27:06.044113  223389 system_pods.go:89] "kindnet-94l8q" [f914e471-760c-4cc6-ad8e-b3c0372d9f38] Running
	I1016 18:27:06.044118  223389 system_pods.go:89] "kube-apiserver-old-k8s-version-956814" [996ba7eb-ac5d-4500-9246-998ab92fdde9] Running
	I1016 18:27:06.044124  223389 system_pods.go:89] "kube-controller-manager-old-k8s-version-956814" [2acfa7fd-18c6-49eb-ab39-997dae3776da] Running
	I1016 18:27:06.044129  223389 system_pods.go:89] "kube-proxy-nkwcm" [42a87fa5-c9a9-4549-82ae-7026313269a8] Running
	I1016 18:27:06.044135  223389 system_pods.go:89] "kube-scheduler-old-k8s-version-956814" [68968c7d-d4ba-4b40-a014-805d7d5acdbc] Running
	I1016 18:27:06.044150  223389 system_pods.go:89] "storage-provisioner" [58886065-9960-40b4-964e-f767d2460754] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:27:06.044168  223389 retry.go:31] will retry after 420.265277ms: missing components: kube-dns
	I1016 18:27:06.469341  223389 system_pods.go:86] 8 kube-system pods found
	I1016 18:27:06.469374  223389 system_pods.go:89] "coredns-5dd5756b68-kdcm7" [843a7578-3aeb-49b4-afcf-aa7d0c26f7f2] Running
	I1016 18:27:06.469383  223389 system_pods.go:89] "etcd-old-k8s-version-956814" [df912ec6-1f46-496f-8651-3d9e192ac464] Running
	I1016 18:27:06.469389  223389 system_pods.go:89] "kindnet-94l8q" [f914e471-760c-4cc6-ad8e-b3c0372d9f38] Running
	I1016 18:27:06.469396  223389 system_pods.go:89] "kube-apiserver-old-k8s-version-956814" [996ba7eb-ac5d-4500-9246-998ab92fdde9] Running
	I1016 18:27:06.469408  223389 system_pods.go:89] "kube-controller-manager-old-k8s-version-956814" [2acfa7fd-18c6-49eb-ab39-997dae3776da] Running
	I1016 18:27:06.469417  223389 system_pods.go:89] "kube-proxy-nkwcm" [42a87fa5-c9a9-4549-82ae-7026313269a8] Running
	I1016 18:27:06.469420  223389 system_pods.go:89] "kube-scheduler-old-k8s-version-956814" [68968c7d-d4ba-4b40-a014-805d7d5acdbc] Running
	I1016 18:27:06.469423  223389 system_pods.go:89] "storage-provisioner" [58886065-9960-40b4-964e-f767d2460754] Running
	I1016 18:27:06.469431  223389 system_pods.go:126] duration metric: took 1.000514088s to wait for k8s-apps to be running ...
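The kube-dns wait that just completed retries with short, growing, jittered delays (197ms, 363ms, then 420ms in this run) rather than a fixed tick, which keeps the happy path fast while still backing off under load. A pure-Go sketch of that loop shape; the exact backoff curve here is an assumption, not retry.go's:

-- sketch --
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryUntil runs check until it returns no error or the deadline hits,
// sleeping a jittered, growing delay between attempts.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 150 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out; last error: %v", err)
		}
		// Sleep a jittered delay in [delay/2, 1.5*delay), then grow it.
		time.Sleep(delay/2 + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
		if delay > 2*time.Second {
			delay = 2 * time.Second
		}
	}
}

func main() {
	attempts := 0
	err := retryUntil(10*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println(attempts, err)
}
-- /sketch --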
	I1016 18:27:06.469438  223389 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 18:27:06.469488  223389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:27:06.483446  223389 system_svc.go:56] duration metric: took 13.995192ms WaitForService to wait for kubelet
	I1016 18:27:06.483488  223389 kubeadm.go:586] duration metric: took 14.906152927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:27:06.483511  223389 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:27:06.486441  223389 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1016 18:27:06.486466  223389 node_conditions.go:123] node cpu capacity is 8
	I1016 18:27:06.486480  223389 node_conditions.go:105] duration metric: took 2.96305ms to run NodePressure ...
	I1016 18:27:06.486490  223389 start.go:241] waiting for startup goroutines ...
	I1016 18:27:06.486497  223389 start.go:246] waiting for cluster config update ...
	I1016 18:27:06.486506  223389 start.go:255] writing updated cluster config ...
	I1016 18:27:06.486771  223389 ssh_runner.go:195] Run: rm -f paused
	I1016 18:27:06.490786  223389 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:27:06.495829  223389 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-kdcm7" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:27:06.500820  223389 pod_ready.go:94] pod "coredns-5dd5756b68-kdcm7" is "Ready"
	I1016 18:27:06.500843  223389 pod_ready.go:86] duration metric: took 4.988778ms for pod "coredns-5dd5756b68-kdcm7" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:27:06.503483  223389 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-956814" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:27:06.508239  223389 pod_ready.go:94] pod "etcd-old-k8s-version-956814" is "Ready"
	I1016 18:27:06.508276  223389 pod_ready.go:86] duration metric: took 4.771751ms for pod "etcd-old-k8s-version-956814" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:27:06.511821  223389 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-956814" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:27:06.517438  223389 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-956814" is "Ready"
	I1016 18:27:06.517486  223389 pod_ready.go:86] duration metric: took 5.630017ms for pod "kube-apiserver-old-k8s-version-956814" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:27:06.520796  223389 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-956814" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:27:06.895169  223389 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-956814" is "Ready"
	I1016 18:27:06.895193  223389 pod_ready.go:86] duration metric: took 374.37731ms for pod "kube-controller-manager-old-k8s-version-956814" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:27:07.096283  223389 pod_ready.go:83] waiting for pod "kube-proxy-nkwcm" in "kube-system" namespace to be "Ready" or be gone ...
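Each pod_ready wait above checks one thing per pod: whether the PodReady condition reports True (a pod that is gone also counts as done for these label selectors). A minimal client-go sketch of that predicate, assuming a kubeconfig path; it mirrors the idea, not minikube's pod_ready.go:

-- sketch --
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the named kube-system pod has the
// PodReady condition set to True.
func podIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(
		context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(podIsReady(cs, "kube-proxy-nkwcm"))
}
-- /sketch --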
	I1016 18:27:05.655671  228034 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:27:05.655702  228034 machine.go:96] duration metric: took 1.040129205s to provisionDockerMachine
	I1016 18:27:05.655730  228034 start.go:293] postStartSetup for "missing-upgrade-294813" (driver="docker")
	I1016 18:27:05.655744  228034 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:27:05.655917  228034 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:27:05.655983  228034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-294813
	I1016 18:27:05.680062  228034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/missing-upgrade-294813/id_rsa Username:docker}
	I1016 18:27:05.770240  228034 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:27:05.774398  228034 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:27:05.774443  228034 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1016 18:27:05.774456  228034 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1016 18:27:05.774463  228034 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1016 18:27:05.774478  228034 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/addons for local assets ...
	I1016 18:27:05.774546  228034 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/files for local assets ...
	I1016 18:27:05.774643  228034 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem -> 123752.pem in /etc/ssl/certs
	I1016 18:27:05.774799  228034 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:27:05.785345  228034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:27:05.812441  228034 start.go:296] duration metric: took 156.693823ms for postStartSetup
	I1016 18:27:05.812518  228034 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:27:05.812569  228034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-294813
	I1016 18:27:05.832039  228034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/missing-upgrade-294813/id_rsa Username:docker}
	I1016 18:27:05.917541  228034 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:27:05.922694  228034 fix.go:56] duration metric: took 29.998476013s for fixHost
	I1016 18:27:05.922769  228034 start.go:83] releasing machines lock for "missing-upgrade-294813", held for 29.99859306s
	I1016 18:27:05.922856  228034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-294813
	I1016 18:27:05.942128  228034 ssh_runner.go:195] Run: cat /version.json
	I1016 18:27:05.942189  228034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-294813
	I1016 18:27:05.942200  228034 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:27:05.942334  228034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-294813
	I1016 18:27:05.962865  228034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/missing-upgrade-294813/id_rsa Username:docker}
	I1016 18:27:05.963239  228034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/missing-upgrade-294813/id_rsa Username:docker}
	W1016 18:27:06.046959  228034 out.go:285] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.32.0 -> Actual minikube version: v1.37.0
	I1016 18:27:06.047058  228034 ssh_runner.go:195] Run: systemctl --version
	I1016 18:27:06.144668  228034 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:27:06.287068  228034 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1016 18:27:06.292466  228034 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:27:06.316905  228034 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1016 18:27:06.316995  228034 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:27:06.349889  228034 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
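Note that the loopback and bridge/podman CNI configs are not deleted, only renamed with a .mk_disabled suffix so that kindnet can own pod networking while the originals remain recoverable. A sketch of the rename pattern, with the shell quoting tightened relative to the raw commands above:

    # Disable (rename) any bridge or podman CNI configs that are not already disabled
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;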
	I1016 18:27:06.349909  228034 start.go:495] detecting cgroup driver to use...
	I1016 18:27:06.349941  228034 detect.go:190] detected "systemd" cgroup driver on host os
	I1016 18:27:06.349987  228034 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:27:06.365804  228034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:27:06.378002  228034 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:27:06.378086  228034 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:27:06.393339  228034 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:27:06.408390  228034 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:27:06.475988  228034 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:27:06.555628  228034 docker.go:234] disabling docker service ...
	I1016 18:27:06.555695  228034 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:27:06.574750  228034 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:27:06.588053  228034 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:27:06.657551  228034 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:27:06.842906  228034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:27:06.855617  228034 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:27:06.873879  228034 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1016 18:27:06.873946  228034 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:27:06.888049  228034 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1016 18:27:06.888123  228034 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:27:06.899424  228034 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:27:06.910829  228034 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:27:06.922147  228034 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:27:06.932695  228034 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:27:06.943854  228034 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:27:06.961378  228034 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:27:06.972509  228034 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:27:06.981946  228034 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:27:06.991114  228034 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:27:07.058257  228034 ssh_runner.go:195] Run: sudo systemctl restart crio
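Consolidated, the cri-o reconfiguration above amounts to a handful of in-place edits to /etc/crio/crio.conf.d/02-crio.conf followed by a service restart (these are the same sed expressions shown in the log):

    # Point cri-o at the pinned pause image and the systemd cgroup driver
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf                    # drop any stale value
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio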
	I1016 18:27:07.151641  228034 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:27:07.151741  228034 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:27:07.155931  228034 start.go:563] Will wait 60s for crictl version
	I1016 18:27:07.155980  228034 ssh_runner.go:195] Run: which crictl
	I1016 18:27:07.159668  228034 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1016 18:27:07.196914  228034 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1016 18:27:07.196998  228034 ssh_runner.go:195] Run: crio --version
	I1016 18:27:07.233766  228034 ssh_runner.go:195] Run: crio --version
	I1016 18:27:07.273195  228034 out.go:179] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1016 18:27:07.495811  223389 pod_ready.go:94] pod "kube-proxy-nkwcm" is "Ready"
	I1016 18:27:07.495839  223389 pod_ready.go:86] duration metric: took 399.531592ms for pod "kube-proxy-nkwcm" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:27:07.695902  223389 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-956814" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:27:08.095424  223389 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-956814" is "Ready"
	I1016 18:27:08.095454  223389 pod_ready.go:86] duration metric: took 399.524453ms for pod "kube-scheduler-old-k8s-version-956814" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:27:08.095468  223389 pod_ready.go:40] duration metric: took 1.604648795s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:27:08.142141  223389 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1016 18:27:08.144137  223389 out.go:203] 
	W1016 18:27:08.145559  223389 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1016 18:27:08.146754  223389 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1016 18:27:08.148642  223389 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-956814" cluster and "default" namespace by default
	I1016 18:27:07.603804  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1016 18:27:07.603859  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:07.274639  228034 cli_runner.go:164] Run: docker network inspect missing-upgrade-294813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:27:07.292057  228034 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1016 18:27:07.296240  228034 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
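The /etc/hosts rewrite above follows a filter-append-copy pattern: strip any stale host.minikube.internal entry, append the current gateway IP, then copy the temp file into place. The same one-liner, unpacked:

    # Rebuild /etc/hosts with a fresh host.minikube.internal entry
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.94.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts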
	I1016 18:27:07.308506  228034 kubeadm.go:883] updating cluster {Name:missing-upgrade-294813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-294813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:27:07.308631  228034 preload.go:183] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1016 18:27:07.308686  228034 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:27:07.367436  228034 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:27:07.367458  228034 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:27:07.367507  228034 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:27:07.402095  228034 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:27:07.402116  228034 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:27:07.402123  228034 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.28.3 crio true true} ...
	I1016 18:27:07.402229  228034 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=missing-upgrade-294813 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-294813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
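Rendered to disk (as the 372-byte 10-kubeadm.conf scp'd a few lines below), the unit text above becomes a systemd drop-in; the empty ExecStart= first clears the base unit's command before the override takes effect. A sketch of what writing that file looks like, using the path from the log:

    # Install the kubelet override as a systemd drop-in, then reload unit files
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=missing-upgrade-294813 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2

    [Install]
    EOF
    sudo systemctl daemon-reload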
	I1016 18:27:07.402295  228034 ssh_runner.go:195] Run: crio config
	I1016 18:27:07.448127  228034 cni.go:84] Creating CNI manager for ""
	I1016 18:27:07.448153  228034 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:27:07.448172  228034 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:27:07.448200  228034 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-294813 NodeName:missing-upgrade-294813 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:27:07.448389  228034 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "missing-upgrade-294813"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 18:27:07.448465  228034 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1016 18:27:07.458992  228034 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:27:07.459076  228034 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:27:07.468758  228034 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1016 18:27:07.487999  228034 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:27:07.510678  228034 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
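With the rendered multi-document config now copied to /var/tmp/minikube/kubeadm.yaml.new, it can also be sanity-checked offline before the phased init calls run. A minimal sketch, assuming the bundled kubeadm binary (the config validate subcommand exists in kubeadm v1.26 and later):

    # Validate all documents in the generated kubeadm config against their schemas
    sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new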
	I1016 18:27:07.529612  228034 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1016 18:27:07.533429  228034 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:27:07.545229  228034 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:27:07.608788  228034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:27:07.629258  228034 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813 for IP: 192.168.94.2
	I1016 18:27:07.629283  228034 certs.go:195] generating shared ca certs ...
	I1016 18:27:07.629298  228034 certs.go:227] acquiring lock for ca certs: {Name:mkebf15a3970a66f77cf66e14f6efeaafcab5e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:27:07.629441  228034 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key
	I1016 18:27:07.629497  228034 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key
	I1016 18:27:07.629511  228034 certs.go:257] generating profile certs ...
	I1016 18:27:07.629610  228034 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/client.key
	I1016 18:27:07.629643  228034 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/apiserver.key.ae2c7ce4
	I1016 18:27:07.629783  228034 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/apiserver.crt.ae2c7ce4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1016 18:27:08.057444  228034 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/apiserver.crt.ae2c7ce4 ...
	I1016 18:27:08.057469  228034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/apiserver.crt.ae2c7ce4: {Name:mkba6cdd3e7eeda0ab697b681f06b2884e8276ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:27:08.057653  228034 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/apiserver.key.ae2c7ce4 ...
	I1016 18:27:08.057671  228034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/apiserver.key.ae2c7ce4: {Name:mk89bf4900c691b8d4bd2d76f3902fc30ecb1cb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:27:08.057796  228034 certs.go:382] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/apiserver.crt.ae2c7ce4 -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/apiserver.crt
	I1016 18:27:08.057963  228034 certs.go:386] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/apiserver.key.ae2c7ce4 -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/apiserver.key
	I1016 18:27:08.058156  228034 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/proxy-client.key
	I1016 18:27:08.058311  228034 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem (1338 bytes)
	W1016 18:27:08.058357  228034 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375_empty.pem, impossibly tiny 0 bytes
	I1016 18:27:08.058370  228034 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem (1679 bytes)
	I1016 18:27:08.058398  228034 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem (1078 bytes)
	I1016 18:27:08.058430  228034 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:27:08.058461  228034 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem (1679 bytes)
	I1016 18:27:08.058522  228034 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:27:08.059133  228034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:27:08.085811  228034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1016 18:27:08.114227  228034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:27:08.142082  228034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1016 18:27:08.170810  228034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1016 18:27:08.200116  228034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1016 18:27:08.228345  228034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:27:08.256138  228034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:27:08.284842  228034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /usr/share/ca-certificates/123752.pem (1708 bytes)
	I1016 18:27:08.315560  228034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:27:08.349777  228034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem --> /usr/share/ca-certificates/12375.pem (1338 bytes)
	I1016 18:27:08.380749  228034 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:27:08.400304  228034 ssh_runner.go:195] Run: openssl version
	I1016 18:27:08.406045  228034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12375.pem && ln -fs /usr/share/ca-certificates/12375.pem /etc/ssl/certs/12375.pem"
	I1016 18:27:08.416079  228034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12375.pem
	I1016 18:27:08.419862  228034 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:49 /usr/share/ca-certificates/12375.pem
	I1016 18:27:08.419909  228034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12375.pem
	I1016 18:27:08.427306  228034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12375.pem /etc/ssl/certs/51391683.0"
	I1016 18:27:08.437804  228034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123752.pem && ln -fs /usr/share/ca-certificates/123752.pem /etc/ssl/certs/123752.pem"
	I1016 18:27:08.449263  228034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123752.pem
	I1016 18:27:08.453232  228034 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:49 /usr/share/ca-certificates/123752.pem
	I1016 18:27:08.453297  228034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123752.pem
	I1016 18:27:08.460622  228034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123752.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:27:08.471222  228034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:27:08.481874  228034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:27:08.485915  228034 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:27:08.485971  228034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:27:08.493138  228034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
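The hashed link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes of each certificate, which is how the system trust store indexes CAs; reproducing one by hand:

    # Derive the trust-store symlink name from the cert's subject hash
    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"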
	I1016 18:27:08.504313  228034 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:27:08.508221  228034 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:27:08.515311  228034 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:27:08.522193  228034 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:27:08.529330  228034 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:27:08.536757  228034 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:27:08.544004  228034 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
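Each -checkend 86400 probe above exits nonzero if the certificate expires within the next 24 hours (86400 seconds), so a clean pass through all six checks means no control-plane cert needs regeneration. For example:

    # Exit status 0 means the cert is still valid for at least another day
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for >=24h" || echo "expires within 24h"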
	I1016 18:27:08.551880  228034 kubeadm.go:400] StartCluster: {Name:missing-upgrade-294813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-294813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:27:08.551990  228034 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:27:08.552067  228034 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:27:08.589329  228034 cri.go:89] found id: ""
	I1016 18:27:08.589420  228034 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W1016 18:27:08.599063  228034 kubeadm.go:413] apiserver tunnel failed: apiserver port not set
	I1016 18:27:08.599092  228034 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 18:27:08.599098  228034 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 18:27:08.599146  228034 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 18:27:08.608257  228034 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:27:08.609134  228034 kubeconfig.go:125] found "missing-upgrade-294813" server: "https://192.168.94.2:8443"
	I1016 18:27:08.610316  228034 kapi.go:59] client config for missing-upgrade-294813: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/client.key", CAFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1016 18:27:08.610758  228034 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1016 18:27:08.610778  228034 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1016 18:27:08.610783  228034 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1016 18:27:08.610787  228034 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1016 18:27:08.610791  228034 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1016 18:27:08.611114  228034 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 18:27:08.620538  228034 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-16 18:26:10.927524993 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-16 18:27:07.527442070 +0000
	@@ -50,6 +50,7 @@
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	 cgroupDriver: systemd
	+containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	 hairpinMode: hairpin-veth
	 runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	
	-- /stdout --
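The drift detection reduces to a single comparison: if diff -u between the live kubeadm.yaml and the freshly rendered .new file produces output (as it does above, for the added containerRuntimeEndpoint line), the cluster is reconfigured from the new file. In shell terms:

    # Any diff output (nonzero exit) means the config drifted and should be replaced
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml   # done a few lines below
    fi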
	I1016 18:27:08.620556  228034 kubeadm.go:1160] stopping kube-system containers ...
	I1016 18:27:08.620569  228034 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1016 18:27:08.620613  228034 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:27:08.656553  228034 cri.go:89] found id: ""
	I1016 18:27:08.656617  228034 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1016 18:27:08.670630  228034 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 18:27:08.682454  228034 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1016 18:27:08.682476  228034 kubeadm.go:157] found existing configuration files:
	
	I1016 18:27:08.682513  228034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I1016 18:27:08.692424  228034 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1016 18:27:08.692474  228034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 18:27:08.701354  228034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I1016 18:27:08.710812  228034 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1016 18:27:08.710873  228034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 18:27:08.719864  228034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I1016 18:27:08.729838  228034 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1016 18:27:08.729898  228034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1016 18:27:08.738754  228034 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I1016 18:27:08.747851  228034 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1016 18:27:08.747912  228034 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1016 18:27:08.757165  228034 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 18:27:08.766554  228034 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:27:08.820476  228034 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:27:09.656381  228034 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:27:09.797223  228034 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:27:09.858188  228034 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
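Because existing configuration files were found, the restart path replays individual kubeadm init phases rather than running a full kubeadm init. The five calls above, condensed into a sketch:

    # Replay the control-plane init phases in order against the same config
    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
      sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml   # $phase unquoted so 'certs all' splits into two args
    done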
	I1016 18:27:09.917470  228034 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:27:09.917558  228034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:27:10.418898  228034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:27:12.605796  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1016 18:27:12.605839  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:12.979108  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:59850->192.168.76.2:8443: read: connection reset by peer
	I1016 18:27:13.096450  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:13.096870  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:13.596122  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:13.596547  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:10.917930  228034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:27:10.930174  228034 api_server.go:72] duration metric: took 1.012714038s to wait for apiserver process to appear ...
	I1016 18:27:10.930198  228034 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:27:10.930223  228034 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:27:13.168868  228034 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1016 18:27:13.168980  228034 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1016 18:27:13.169002  228034 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:27:13.196872  228034 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1016 18:27:13.196986  228034 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1016 18:27:13.430330  228034 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:27:13.434697  228034 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1016 18:27:13.434756  228034 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1016 18:27:13.930385  228034 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:27:13.934857  228034 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1016 18:27:13.934890  228034 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1016 18:27:14.430506  228034 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:27:14.434813  228034 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1016 18:27:14.441876  228034 api_server.go:141] control plane version: v1.28.3
	I1016 18:27:14.441909  228034 api_server.go:131] duration metric: took 3.511704713s to wait for apiserver health ...
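The healthz wait above walks through the expected sequence: 403 while anonymous access is still forbidden, 500 while poststarthooks (etcd, RBAC bootstrap roles, priority classes) are still registering, then 200. The same per-check listing can be pulled by hand:

    # Ask the apiserver for the verbose [+]/[-] check list; anonymous requests
    # may see 403 until the RBAC bootstrap roles exist, exactly as in the log
    curl -k 'https://192.168.94.2:8443/healthz?verbose'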
	I1016 18:27:14.441922  228034 cni.go:84] Creating CNI manager for ""
	I1016 18:27:14.441929  228034 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:27:14.444001  228034 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 18:27:14.445622  228034 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 18:27:14.450196  228034 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1016 18:27:14.450229  228034 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 18:27:14.472596  228034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 18:27:15.114436  228034 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:27:15.119036  228034 system_pods.go:59] 5 kube-system pods found
	I1016 18:27:15.119086  228034 system_pods.go:61] "etcd-missing-upgrade-294813" [f746cc1a-fb5d-4cdc-b210-0315370265d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 18:27:15.119109  228034 system_pods.go:61] "kube-apiserver-missing-upgrade-294813" [cd8bfe2c-a7f0-41d9-87bd-5fa475dd0a41] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:27:15.119120  228034 system_pods.go:61] "kube-controller-manager-missing-upgrade-294813" [04c6c421-8f6c-437b-a9a8-a77a64f219b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:27:15.119129  228034 system_pods.go:61] "kube-scheduler-missing-upgrade-294813" [f8a7b77c-c98a-420a-b547-11685ed4da2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:27:15.119137  228034 system_pods.go:61] "storage-provisioner" [102d119c-9435-43d1-a391-91f53fcd0414] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I1016 18:27:15.119160  228034 system_pods.go:74] duration metric: took 4.70183ms to wait for pod list to return data ...
	I1016 18:27:15.119172  228034 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:27:15.121849  228034 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1016 18:27:15.121875  228034 node_conditions.go:123] node cpu capacity is 8
	I1016 18:27:15.121887  228034 node_conditions.go:105] duration metric: took 2.709534ms to run NodePressure ...
	I1016 18:27:15.121949  228034 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1016 18:27:15.296916  228034 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 18:27:15.304962  228034 ops.go:34] apiserver oom_adj: -16
	I1016 18:27:15.304980  228034 kubeadm.go:601] duration metric: took 6.705877392s to restartPrimaryControlPlane
	I1016 18:27:15.304992  228034 kubeadm.go:402] duration metric: took 6.753120815s to StartCluster
	I1016 18:27:15.305011  228034 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:27:15.305086  228034 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:27:15.306439  228034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:27:15.306694  228034 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:27:15.306824  228034 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:27:15.306891  228034 addons.go:69] Setting storage-provisioner=true in profile "missing-upgrade-294813"
	I1016 18:27:15.306905  228034 addons.go:238] Setting addon storage-provisioner=true in "missing-upgrade-294813"
	W1016 18:27:15.306913  228034 addons.go:247] addon storage-provisioner should already be in state true
	I1016 18:27:15.306934  228034 config.go:182] Loaded profile config "missing-upgrade-294813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1016 18:27:15.306934  228034 addons.go:69] Setting default-storageclass=true in profile "missing-upgrade-294813"
	I1016 18:27:15.306944  228034 host.go:66] Checking if "missing-upgrade-294813" exists ...
	I1016 18:27:15.306988  228034 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "missing-upgrade-294813"
	I1016 18:27:15.307329  228034 cli_runner.go:164] Run: docker container inspect missing-upgrade-294813 --format={{.State.Status}}
	I1016 18:27:15.307411  228034 cli_runner.go:164] Run: docker container inspect missing-upgrade-294813 --format={{.State.Status}}
	I1016 18:27:15.308742  228034 out.go:179] * Verifying Kubernetes components...
	I1016 18:27:15.310434  228034 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:27:15.331253  228034 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:27:15.331288  228034 kapi.go:59] client config for missing-upgrade-294813: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/client.crt", KeyFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/missing-upgrade-294813/client.key", CAFile:"/home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1016 18:27:15.331708  228034 addons.go:238] Setting addon default-storageclass=true in "missing-upgrade-294813"
	W1016 18:27:15.331743  228034 addons.go:247] addon default-storageclass should already be in state true
	I1016 18:27:15.331775  228034 host.go:66] Checking if "missing-upgrade-294813" exists ...
	I1016 18:27:15.332268  228034 cli_runner.go:164] Run: docker container inspect missing-upgrade-294813 --format={{.State.Status}}
	I1016 18:27:15.332979  228034 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:27:15.333001  228034 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:27:15.333058  228034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-294813
	I1016 18:27:15.360378  228034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/missing-upgrade-294813/id_rsa Username:docker}
	I1016 18:27:15.362359  228034 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:27:15.362380  228034 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:27:15.362447  228034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-294813
	I1016 18:27:15.385132  228034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/missing-upgrade-294813/id_rsa Username:docker}
	I1016 18:27:15.423549  228034 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:27:15.437413  228034 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:27:15.437490  228034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:27:15.450526  228034 api_server.go:72] duration metric: took 143.784439ms to wait for apiserver process to appear ...
	I1016 18:27:15.450553  228034 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:27:15.450577  228034 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:27:15.454869  228034 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1016 18:27:15.456029  228034 api_server.go:141] control plane version: v1.28.3
	I1016 18:27:15.456052  228034 api_server.go:131] duration metric: took 5.492129ms to wait for apiserver health ...
	I1016 18:27:15.456061  228034 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:27:15.459349  228034 system_pods.go:59] 5 kube-system pods found
	I1016 18:27:15.459382  228034 system_pods.go:61] "etcd-missing-upgrade-294813" [f746cc1a-fb5d-4cdc-b210-0315370265d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 18:27:15.459394  228034 system_pods.go:61] "kube-apiserver-missing-upgrade-294813" [cd8bfe2c-a7f0-41d9-87bd-5fa475dd0a41] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:27:15.459405  228034 system_pods.go:61] "kube-controller-manager-missing-upgrade-294813" [04c6c421-8f6c-437b-a9a8-a77a64f219b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:27:15.459414  228034 system_pods.go:61] "kube-scheduler-missing-upgrade-294813" [f8a7b77c-c98a-420a-b547-11685ed4da2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:27:15.459421  228034 system_pods.go:61] "storage-provisioner" [102d119c-9435-43d1-a391-91f53fcd0414] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I1016 18:27:15.459429  228034 system_pods.go:74] duration metric: took 3.362075ms to wait for pod list to return data ...
	I1016 18:27:15.459444  228034 kubeadm.go:586] duration metric: took 152.708368ms to wait for: map[apiserver:true system_pods:true]
	I1016 18:27:15.459463  228034 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:27:15.461788  228034 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1016 18:27:15.461808  228034 node_conditions.go:123] node cpu capacity is 8
	I1016 18:27:15.461819  228034 node_conditions.go:105] duration metric: took 2.352461ms to run NodePressure ...
	I1016 18:27:15.461829  228034 start.go:241] waiting for startup goroutines ...
	I1016 18:27:15.462178  228034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:27:15.484591  228034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:27:15.778278  228034 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1016 18:27:15.780236  228034 addons.go:514] duration metric: took 473.410853ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1016 18:27:15.780283  228034 start.go:246] waiting for cluster config update ...
	I1016 18:27:15.780293  228034 start.go:255] writing updated cluster config ...
	I1016 18:27:15.780543  228034 ssh_runner.go:195] Run: rm -f paused
	I1016 18:27:15.828967  228034 start.go:624] kubectl: 1.34.1, cluster: 1.28.3 (minor skew: 6)
	I1016 18:27:15.830438  228034 out.go:203] 
	W1016 18:27:15.831793  228034 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.3.
	I1016 18:27:15.833168  228034 out.go:179]   - Want kubectl v1.28.3? Try 'minikube kubectl -- get pods -A'
	I1016 18:27:15.834485  228034 out.go:179] * Done! kubectl is now configured to use "missing-upgrade-294813" cluster and "default" namespace by default
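	
	The version-skew warning above is only advisory; as the hint two lines earlier suggests, minikube ships a matching kubectl. A minimal sketch, assuming the same profile name:
	
	    # run the bundled kubectl (v1.28.3) against this profile instead of the host's v1.34.1
	    minikube -p missing-upgrade-294813 kubectl -- get pods -A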
	
	
	==> CRI-O <==
	Oct 16 18:27:05 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:05.631465743Z" level=info msg="Started container" PID=2143 containerID=629f8cafda44b8ec7381595252a2df8abc41398a07371cc770c7d386b497cb87 description=kube-system/storage-provisioner/storage-provisioner id=7a8f4e7d-0183-4ac9-9524-3adb3dd12bdf name=/runtime.v1.RuntimeService/StartContainer sandboxID=bfe0a94cf7477347ab2816b290636ad6dbba2477581c4371588d303cff3aa206
	Oct 16 18:27:05 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:05.632700151Z" level=info msg="Started container" PID=2146 containerID=fef6385ad62120d124b0db4c976f32045c703977f00b910bfdbfe3d8e59cbd1a description=kube-system/coredns-5dd5756b68-kdcm7/coredns id=6f8f3e22-2c9c-4c67-91e4-a5a4ac566448 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e41e819e37093140743cc4cb904d951fa7b01efed059b9ec51a7a415539d3db9
	Oct 16 18:27:08 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:08.654900563Z" level=info msg="Running pod sandbox: default/busybox/POD" id=bbd23bb3-7a20-40d7-a30c-589d5c5a027f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:27:08 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:08.655010249Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:27:08 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:08.660455347Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5fd9e72abc16f8ad738ed326aacba963beb80a8b2d2fc4ede26e7a392e4336f3 UID:a0840267-3a77-4fd9-8a8f-decbfcf3849a NetNS:/var/run/netns/2b7fdc30-e93c-470c-92c7-84d454ffe9ac Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000654580}] Aliases:map[]}"
	Oct 16 18:27:08 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:08.660486246Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 16 18:27:08 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:08.67070917Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5fd9e72abc16f8ad738ed326aacba963beb80a8b2d2fc4ede26e7a392e4336f3 UID:a0840267-3a77-4fd9-8a8f-decbfcf3849a NetNS:/var/run/netns/2b7fdc30-e93c-470c-92c7-84d454ffe9ac Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000654580}] Aliases:map[]}"
	Oct 16 18:27:08 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:08.670910261Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 16 18:27:08 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:08.671704043Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 16 18:27:08 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:08.672640425Z" level=info msg="Ran pod sandbox 5fd9e72abc16f8ad738ed326aacba963beb80a8b2d2fc4ede26e7a392e4336f3 with infra container: default/busybox/POD" id=bbd23bb3-7a20-40d7-a30c-589d5c5a027f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:27:08 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:08.673889922Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6bd79537-6e1b-42df-80b9-ed6120fa8715 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:27:08 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:08.673998494Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=6bd79537-6e1b-42df-80b9-ed6120fa8715 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:27:08 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:08.674033531Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=6bd79537-6e1b-42df-80b9-ed6120fa8715 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:27:08 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:08.674509354Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e480aa51-3d9e-4127-968d-04b54111f82f name=/runtime.v1.ImageService/PullImage
	Oct 16 18:27:08 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:08.677132453Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 16 18:27:10 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:10.107216962Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=e480aa51-3d9e-4127-968d-04b54111f82f name=/runtime.v1.ImageService/PullImage
	Oct 16 18:27:10 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:10.108104597Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=002713bf-070c-43fe-ad96-740008918fa2 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:27:10 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:10.109763908Z" level=info msg="Creating container: default/busybox/busybox" id=62994d23-218d-48c6-b711-953f502f4bff name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:27:10 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:10.110543598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:27:10 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:10.114307929Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:27:10 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:10.11481912Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:27:10 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:10.140201894Z" level=info msg="Created container 1de87c9fa0810c4947f2308cf9519cf218cc16fae77cd4a7d7c2ec3a23e2a1ea: default/busybox/busybox" id=62994d23-218d-48c6-b711-953f502f4bff name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:27:10 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:10.140852329Z" level=info msg="Starting container: 1de87c9fa0810c4947f2308cf9519cf218cc16fae77cd4a7d7c2ec3a23e2a1ea" id=11910917-11fb-488c-8c7d-a301b9a93cb9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:27:10 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:10.142575493Z" level=info msg="Started container" PID=2220 containerID=1de87c9fa0810c4947f2308cf9519cf218cc16fae77cd4a7d7c2ec3a23e2a1ea description=default/busybox/busybox id=11910917-11fb-488c-8c7d-a301b9a93cb9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5fd9e72abc16f8ad738ed326aacba963beb80a8b2d2fc4ede26e7a392e4336f3
	Oct 16 18:27:17 old-k8s-version-956814 crio[775]: time="2025-10-16T18:27:17.439645741Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	1de87c9fa0810       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   5fd9e72abc16f       busybox                                          default
	fef6385ad6212       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   e41e819e37093       coredns-5dd5756b68-kdcm7                         kube-system
	629f8cafda44b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   bfe0a94cf7477       storage-provisioner                              kube-system
	1441f5d38f2f1       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   304bdf6b2d57d       kindnet-94l8q                                    kube-system
	ebeea557ba319       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      27 seconds ago      Running             kube-proxy                0                   1a7c64761886e       kube-proxy-nkwcm                                 kube-system
	08a87168a0f6c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      44 seconds ago      Running             etcd                      0                   cae9f798e2aac       etcd-old-k8s-version-956814                      kube-system
	81dfa1d55f8d6       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      44 seconds ago      Running             kube-apiserver            0                   8badfe711bbd7       kube-apiserver-old-k8s-version-956814            kube-system
	60ec89fc2450b       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      44 seconds ago      Running             kube-scheduler            0                   ee3855c0b85cf       kube-scheduler-old-k8s-version-956814            kube-system
	a4ba04429c92f       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      44 seconds ago      Running             kube-controller-manager   0                   c4d2fadc4c651       kube-controller-manager-old-k8s-version-956814   kube-system
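	
	The table above is CRI-level container state. A minimal sketch of reproducing it on the node, assuming crictl is installed and pointed at the CRI-O socket named in the node annotations below:
	
	    # list all containers (running and exited) through the CRI API
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a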
	
	
	==> coredns [fef6385ad62120d124b0db4c976f32045c703977f00b910bfdbfe3d8e59cbd1a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45220 - 59844 "HINFO IN 3624509675626090878.4354642584052350948. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.429038644s
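	
	A minimal sketch of fetching these CoreDNS logs directly, assuming the standard k8s-app=kube-dns label that kubeadm applies to the deployment:
	
	    kubectl -n kube-system logs -l k8s-app=kube-dns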
	
	
	==> describe nodes <==
	Name:               old-k8s-version-956814
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-956814
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=old-k8s-version-956814
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_26_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:26:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-956814
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:27:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:27:09 +0000   Thu, 16 Oct 2025 18:26:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:27:09 +0000   Thu, 16 Oct 2025 18:26:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:27:09 +0000   Thu, 16 Oct 2025 18:26:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 18:27:09 +0000   Thu, 16 Oct 2025 18:27:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-956814
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                16c7f49b-fe0a-4b26-a8a7-b5d233753b17
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-kdcm7                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-old-k8s-version-956814                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-94l8q                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-956814             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-956814    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-nkwcm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-956814             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node old-k8s-version-956814 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node old-k8s-version-956814 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x8 over 45s)  kubelet          Node old-k8s-version-956814 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet          Node old-k8s-version-956814 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet          Node old-k8s-version-956814 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet          Node old-k8s-version-956814 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node old-k8s-version-956814 event: Registered Node old-k8s-version-956814 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-956814 status is now: NodeReady
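	
	This dump is the standard node description. A minimal sketch of regenerating it against the cluster, using the node name shown above:
	
	    kubectl describe node old-k8s-version-956814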
	
	
	==> dmesg <==
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.008703] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.022984] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023933] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +2.047848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +4.031714] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +8.063410] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[ +16.382910] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[Oct16 17:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	
	
	==> etcd [08a87168a0f6ce9df68a7537c14c3942d0c38d72b97700e4320c46cbbc898306] <==
	{"level":"info","ts":"2025-10-16T18:26:34.1265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-10-16T18:26:34.12665Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-10-16T18:26:34.127774Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-16T18:26:34.127858Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-16T18:26:34.127895Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-16T18:26:34.128038Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-16T18:26:34.128069Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-16T18:26:34.314154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-16T18:26:34.31422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-16T18:26:34.314236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-10-16T18:26:34.314251Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-10-16T18:26:34.314257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-16T18:26:34.314264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-10-16T18:26:34.314271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-16T18:26:34.31498Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-16T18:26:34.315604Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-16T18:26:34.315637Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-16T18:26:34.315598Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-956814 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-16T18:26:34.315899Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-16T18:26:34.316003Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-16T18:26:34.316095Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-16T18:26:34.315911Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-16T18:26:34.316142Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-16T18:26:34.316921Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-10-16T18:26:34.317066Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:27:19 up  1:09,  0 user,  load average: 3.01, 2.54, 1.56
	Linux old-k8s-version-956814 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1441f5d38f2f140f28d9618c90e949df9d6e414d5c0b176f06cfb763c23fccde] <==
	I1016 18:26:54.510384       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 18:26:54.510653       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1016 18:26:54.510810       1 main.go:148] setting mtu 1500 for CNI 
	I1016 18:26:54.510827       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 18:26:54.510853       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T18:26:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 18:26:54.806379       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 18:26:54.806517       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 18:26:54.905811       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 18:26:54.905990       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 18:26:55.206055       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 18:26:55.206098       1 metrics.go:72] Registering metrics
	I1016 18:26:55.206150       1 controller.go:711] "Syncing nftables rules"
	I1016 18:27:04.715795       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:27:04.715856       1 main.go:301] handling current node
	I1016 18:27:14.713088       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:27:14.713126       1 main.go:301] handling current node
	
	
	==> kube-apiserver [81dfa1d55f8d695a5718f4ce699ccdd798d16d1d39772cfbdaa3091624a40fe1] <==
	I1016 18:26:35.797828       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1016 18:26:35.800422       1 controller.go:624] quota admission added evaluator for: namespaces
	I1016 18:26:35.814764       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1016 18:26:35.814802       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1016 18:26:35.814983       1 aggregator.go:166] initial CRD sync complete...
	I1016 18:26:35.814993       1 autoregister_controller.go:141] Starting autoregister controller
	I1016 18:26:35.814999       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 18:26:35.815008       1 cache.go:39] Caches are synced for autoregister controller
	I1016 18:26:35.836025       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:26:35.843757       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1016 18:26:36.698275       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1016 18:26:36.704116       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1016 18:26:36.704216       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 18:26:37.267603       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 18:26:37.305387       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 18:26:37.410803       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1016 18:26:37.416344       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1016 18:26:37.417332       1 controller.go:624] quota admission added evaluator for: endpoints
	I1016 18:26:37.422172       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 18:26:37.792853       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1016 18:26:38.885121       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1016 18:26:38.898881       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1016 18:26:38.909895       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1016 18:26:51.350112       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1016 18:26:51.450019       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a4ba04429c92f0224bd14b5d7c416318447cbf120e7d06edfea522e5918f2eff] <==
	I1016 18:26:51.110557       1 shared_informer.go:318] Caches are synced for PV protection
	I1016 18:26:51.112792       1 shared_informer.go:318] Caches are synced for endpoint
	I1016 18:26:51.119076       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1016 18:26:51.141709       1 shared_informer.go:318] Caches are synced for persistent volume
	I1016 18:26:51.354265       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1016 18:26:51.461502       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nkwcm"
	I1016 18:26:51.463879       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-94l8q"
	I1016 18:26:51.483345       1 shared_informer.go:318] Caches are synced for garbage collector
	I1016 18:26:51.536650       1 shared_informer.go:318] Caches are synced for garbage collector
	I1016 18:26:51.536690       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1016 18:26:51.657629       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-g6nc2"
	I1016 18:26:51.663339       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-kdcm7"
	I1016 18:26:51.684235       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="329.917908ms"
	I1016 18:26:51.705176       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.858207ms"
	I1016 18:26:51.705408       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.703µs"
	I1016 18:26:51.966895       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1016 18:26:51.978131       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-g6nc2"
	I1016 18:26:51.990673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.971381ms"
	I1016 18:26:52.003414       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.672594ms"
	I1016 18:26:52.004307       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.749µs"
	I1016 18:27:05.275599       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="111.314µs"
	I1016 18:27:05.287246       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="111.119µs"
	I1016 18:27:05.905833       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1016 18:27:06.072903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.695962ms"
	I1016 18:27:06.073100       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.357µs"
	
	
	==> kube-proxy [ebeea557ba31964804fab0bd41016abcc8d99fe531b8b0a75041505d6256bd7a] <==
	I1016 18:26:51.891989       1 server_others.go:69] "Using iptables proxy"
	I1016 18:26:51.907000       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1016 18:26:51.931952       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:26:51.934545       1 server_others.go:152] "Using iptables Proxier"
	I1016 18:26:51.934587       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1016 18:26:51.934596       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1016 18:26:51.934627       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1016 18:26:51.934906       1 server.go:846] "Version info" version="v1.28.0"
	I1016 18:26:51.934972       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:26:51.936137       1 config.go:315] "Starting node config controller"
	I1016 18:26:51.937796       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1016 18:26:51.936229       1 config.go:97] "Starting endpoint slice config controller"
	I1016 18:26:51.938881       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1016 18:26:51.939380       1 config.go:188] "Starting service config controller"
	I1016 18:26:51.939431       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1016 18:26:52.040252       1 shared_informer.go:318] Caches are synced for service config
	I1016 18:26:52.040314       1 shared_informer.go:318] Caches are synced for node config
	I1016 18:26:52.040352       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [60ec89fc2450b42700d8353eb1e6986a1bb2b8215745960d1e763052b9e88591] <==
	E1016 18:26:35.851142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1016 18:26:35.851011       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1016 18:26:35.851034       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1016 18:26:35.851045       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1016 18:26:35.851060       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1016 18:26:35.851130       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1016 18:26:36.692268       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1016 18:26:36.692312       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1016 18:26:36.724749       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1016 18:26:36.724786       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1016 18:26:36.779600       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1016 18:26:36.779639       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1016 18:26:36.809200       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1016 18:26:36.809245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1016 18:26:36.817227       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1016 18:26:36.817266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1016 18:26:36.827543       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1016 18:26:36.827593       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1016 18:26:36.846056       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1016 18:26:36.846100       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1016 18:26:36.881743       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1016 18:26:36.881795       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1016 18:26:37.101477       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1016 18:26:37.101516       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1016 18:26:39.724280       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 16 18:26:50 old-k8s-version-956814 kubelet[1398]: I1016 18:26:50.932103    1398 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 16 18:26:51 old-k8s-version-956814 kubelet[1398]: I1016 18:26:51.468447    1398 topology_manager.go:215] "Topology Admit Handler" podUID="42a87fa5-c9a9-4549-82ae-7026313269a8" podNamespace="kube-system" podName="kube-proxy-nkwcm"
	Oct 16 18:26:51 old-k8s-version-956814 kubelet[1398]: I1016 18:26:51.471482    1398 topology_manager.go:215] "Topology Admit Handler" podUID="f914e471-760c-4cc6-ad8e-b3c0372d9f38" podNamespace="kube-system" podName="kindnet-94l8q"
	Oct 16 18:26:51 old-k8s-version-956814 kubelet[1398]: I1016 18:26:51.639257    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/42a87fa5-c9a9-4549-82ae-7026313269a8-kube-proxy\") pod \"kube-proxy-nkwcm\" (UID: \"42a87fa5-c9a9-4549-82ae-7026313269a8\") " pod="kube-system/kube-proxy-nkwcm"
	Oct 16 18:26:51 old-k8s-version-956814 kubelet[1398]: I1016 18:26:51.639332    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42a87fa5-c9a9-4549-82ae-7026313269a8-xtables-lock\") pod \"kube-proxy-nkwcm\" (UID: \"42a87fa5-c9a9-4549-82ae-7026313269a8\") " pod="kube-system/kube-proxy-nkwcm"
	Oct 16 18:26:51 old-k8s-version-956814 kubelet[1398]: I1016 18:26:51.639364    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f914e471-760c-4cc6-ad8e-b3c0372d9f38-lib-modules\") pod \"kindnet-94l8q\" (UID: \"f914e471-760c-4cc6-ad8e-b3c0372d9f38\") " pod="kube-system/kindnet-94l8q"
	Oct 16 18:26:51 old-k8s-version-956814 kubelet[1398]: I1016 18:26:51.639400    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsn4x\" (UniqueName: \"kubernetes.io/projected/42a87fa5-c9a9-4549-82ae-7026313269a8-kube-api-access-tsn4x\") pod \"kube-proxy-nkwcm\" (UID: \"42a87fa5-c9a9-4549-82ae-7026313269a8\") " pod="kube-system/kube-proxy-nkwcm"
	Oct 16 18:26:51 old-k8s-version-956814 kubelet[1398]: I1016 18:26:51.639432    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f914e471-760c-4cc6-ad8e-b3c0372d9f38-cni-cfg\") pod \"kindnet-94l8q\" (UID: \"f914e471-760c-4cc6-ad8e-b3c0372d9f38\") " pod="kube-system/kindnet-94l8q"
	Oct 16 18:26:51 old-k8s-version-956814 kubelet[1398]: I1016 18:26:51.639572    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42a87fa5-c9a9-4549-82ae-7026313269a8-lib-modules\") pod \"kube-proxy-nkwcm\" (UID: \"42a87fa5-c9a9-4549-82ae-7026313269a8\") " pod="kube-system/kube-proxy-nkwcm"
	Oct 16 18:26:51 old-k8s-version-956814 kubelet[1398]: I1016 18:26:51.639624    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f914e471-760c-4cc6-ad8e-b3c0372d9f38-xtables-lock\") pod \"kindnet-94l8q\" (UID: \"f914e471-760c-4cc6-ad8e-b3c0372d9f38\") " pod="kube-system/kindnet-94l8q"
	Oct 16 18:26:51 old-k8s-version-956814 kubelet[1398]: I1016 18:26:51.639675    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhtxv\" (UniqueName: \"kubernetes.io/projected/f914e471-760c-4cc6-ad8e-b3c0372d9f38-kube-api-access-bhtxv\") pod \"kindnet-94l8q\" (UID: \"f914e471-760c-4cc6-ad8e-b3c0372d9f38\") " pod="kube-system/kindnet-94l8q"
	Oct 16 18:26:52 old-k8s-version-956814 kubelet[1398]: I1016 18:26:52.019842    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nkwcm" podStartSLOduration=1.019786491 podCreationTimestamp="2025-10-16 18:26:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:26:52.019749651 +0000 UTC m=+13.159849248" watchObservedRunningTime="2025-10-16 18:26:52.019786491 +0000 UTC m=+13.159886099"
	Oct 16 18:26:55 old-k8s-version-956814 kubelet[1398]: I1016 18:26:55.027438    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-94l8q" podStartSLOduration=1.476679366 podCreationTimestamp="2025-10-16 18:26:51 +0000 UTC" firstStartedPulling="2025-10-16 18:26:51.781809144 +0000 UTC m=+12.921908744" lastFinishedPulling="2025-10-16 18:26:54.332510928 +0000 UTC m=+15.472610526" observedRunningTime="2025-10-16 18:26:55.027185367 +0000 UTC m=+16.167284996" watchObservedRunningTime="2025-10-16 18:26:55.027381148 +0000 UTC m=+16.167480755"
	Oct 16 18:27:05 old-k8s-version-956814 kubelet[1398]: I1016 18:27:05.250944    1398 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 16 18:27:05 old-k8s-version-956814 kubelet[1398]: I1016 18:27:05.275993    1398 topology_manager.go:215] "Topology Admit Handler" podUID="843a7578-3aeb-49b4-afcf-aa7d0c26f7f2" podNamespace="kube-system" podName="coredns-5dd5756b68-kdcm7"
	Oct 16 18:27:05 old-k8s-version-956814 kubelet[1398]: I1016 18:27:05.277399    1398 topology_manager.go:215] "Topology Admit Handler" podUID="58886065-9960-40b4-964e-f767d2460754" podNamespace="kube-system" podName="storage-provisioner"
	Oct 16 18:27:05 old-k8s-version-956814 kubelet[1398]: I1016 18:27:05.429188    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/843a7578-3aeb-49b4-afcf-aa7d0c26f7f2-config-volume\") pod \"coredns-5dd5756b68-kdcm7\" (UID: \"843a7578-3aeb-49b4-afcf-aa7d0c26f7f2\") " pod="kube-system/coredns-5dd5756b68-kdcm7"
	Oct 16 18:27:05 old-k8s-version-956814 kubelet[1398]: I1016 18:27:05.429248    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzcxp\" (UniqueName: \"kubernetes.io/projected/843a7578-3aeb-49b4-afcf-aa7d0c26f7f2-kube-api-access-kzcxp\") pod \"coredns-5dd5756b68-kdcm7\" (UID: \"843a7578-3aeb-49b4-afcf-aa7d0c26f7f2\") " pod="kube-system/coredns-5dd5756b68-kdcm7"
	Oct 16 18:27:05 old-k8s-version-956814 kubelet[1398]: I1016 18:27:05.429283    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/58886065-9960-40b4-964e-f767d2460754-tmp\") pod \"storage-provisioner\" (UID: \"58886065-9960-40b4-964e-f767d2460754\") " pod="kube-system/storage-provisioner"
	Oct 16 18:27:05 old-k8s-version-956814 kubelet[1398]: I1016 18:27:05.429348    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnt7c\" (UniqueName: \"kubernetes.io/projected/58886065-9960-40b4-964e-f767d2460754-kube-api-access-rnt7c\") pod \"storage-provisioner\" (UID: \"58886065-9960-40b4-964e-f767d2460754\") " pod="kube-system/storage-provisioner"
	Oct 16 18:27:06 old-k8s-version-956814 kubelet[1398]: I1016 18:27:06.065130    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-kdcm7" podStartSLOduration=15.065076005 podCreationTimestamp="2025-10-16 18:26:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:27:06.064920464 +0000 UTC m=+27.205020147" watchObservedRunningTime="2025-10-16 18:27:06.065076005 +0000 UTC m=+27.205175613"
	Oct 16 18:27:06 old-k8s-version-956814 kubelet[1398]: I1016 18:27:06.065269    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.065240148 podCreationTimestamp="2025-10-16 18:26:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:27:06.053528516 +0000 UTC m=+27.193628124" watchObservedRunningTime="2025-10-16 18:27:06.065240148 +0000 UTC m=+27.205339756"
	Oct 16 18:27:08 old-k8s-version-956814 kubelet[1398]: I1016 18:27:08.353235    1398 topology_manager.go:215] "Topology Admit Handler" podUID="a0840267-3a77-4fd9-8a8f-decbfcf3849a" podNamespace="default" podName="busybox"
	Oct 16 18:27:08 old-k8s-version-956814 kubelet[1398]: I1016 18:27:08.447168    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt9tl\" (UniqueName: \"kubernetes.io/projected/a0840267-3a77-4fd9-8a8f-decbfcf3849a-kube-api-access-vt9tl\") pod \"busybox\" (UID: \"a0840267-3a77-4fd9-8a8f-decbfcf3849a\") " pod="default/busybox"
	Oct 16 18:27:11 old-k8s-version-956814 kubelet[1398]: I1016 18:27:11.067548    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.6341661859999999 podCreationTimestamp="2025-10-16 18:27:08 +0000 UTC" firstStartedPulling="2025-10-16 18:27:08.674196775 +0000 UTC m=+29.814296362" lastFinishedPulling="2025-10-16 18:27:10.107523143 +0000 UTC m=+31.247622733" observedRunningTime="2025-10-16 18:27:11.067313067 +0000 UTC m=+32.207412675" watchObservedRunningTime="2025-10-16 18:27:11.067492557 +0000 UTC m=+32.207592184"
	
	
	==> storage-provisioner [629f8cafda44b8ec7381595252a2df8abc41398a07371cc770c7d386b497cb87] <==
	I1016 18:27:05.645822       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 18:27:05.657047       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 18:27:05.657114       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1016 18:27:05.666064       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 18:27:05.666258       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-956814_9738ff67-b4f2-43d8-8bf0-2350986d3fc2!
	I1016 18:27:05.666605       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2e1593ca-3024-4a18-b57d-738a19d42c4d", APIVersion:"v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-956814_9738ff67-b4f2-43d8-8bf0-2350986d3fc2 became leader
	I1016 18:27:05.766895       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-956814_9738ff67-b4f2-43d8-8bf0-2350986d3fc2!
	

-- /stdout --
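
The storage-provisioner log above ends with it winning leader election on the kube-system/k8s.io-minikube-hostpath Endpoints object. The current holder can be read back from outside the node (a minimal sketch, assuming the profile's kubeconfig context is available; the annotation key is client-go's default leader-election key, an assumption not taken from this log):

	kubectl --context old-k8s-version-956814 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'   # holderIdentity should match the leader named above
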
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-956814 -n old-k8s-version-956814
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-956814 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.47s)
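
For reference, the check this test performs can be replayed by hand once the addon actually deploys: it expects the metrics-server Deployment's image to carry the fake.domain registry passed via --registries. A hedged sketch (it assumes the Deployment exists, which the post-mortem above shows it never did):

	kubectl --context old-k8s-version-956814 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'   # expected to start with fake.domain/
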

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.3s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-808539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-808539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (239.597911ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:28:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-808539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
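
The enable never reaches the addon itself: minikube's paused check shells out to sudo runc list -f json, and /run/runc is missing on this crio node, so the check exits 1 as the stderr block above shows. A minimal way to replay the probe by hand (a sketch, assuming the profile is still running; the crictl call is an alternative CRI-level view, not what minikube runs internally):

	out/minikube-linux-amd64 -p no-preload-808539 ssh -- sudo runc list -f json   # reproduces the /run/runc error
	out/minikube-linux-amd64 -p no-preload-808539 ssh -- sudo crictl ps           # lists containers via the CRI socket instead
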
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-808539 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-808539 describe deploy/metrics-server -n kube-system: exit status 1 (59.117192ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-808539 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
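
Since describe returns NotFound, the Deployment was never created and the image assertion has nothing to inspect. A quick existence check to run before asserting on images (a hedged sketch; the k8s-app=metrics-server label is the upstream default and an assumption here, not confirmed by this log):

	kubectl --context no-preload-808539 -n kube-system get deploy,pods -l k8s-app=metrics-server
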
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-808539
helpers_test.go:243: (dbg) docker inspect no-preload-808539:

-- stdout --
	[
	    {
	        "Id": "ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674",
	        "Created": "2025-10-16T18:27:19.34518913Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 235511,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:27:19.394838157Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674/hosts",
	        "LogPath": "/var/lib/docker/containers/ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674/ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674-json.log",
	        "Name": "/no-preload-808539",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-808539:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-808539",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674",
	                "LowerDir": "/var/lib/docker/overlay2/868fea85c82dc716ed77eebcc797a288434c0c337e413bace60fdc41e29b2321-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/868fea85c82dc716ed77eebcc797a288434c0c337e413bace60fdc41e29b2321/merged",
	                "UpperDir": "/var/lib/docker/overlay2/868fea85c82dc716ed77eebcc797a288434c0c337e413bace60fdc41e29b2321/diff",
	                "WorkDir": "/var/lib/docker/overlay2/868fea85c82dc716ed77eebcc797a288434c0c337e413bace60fdc41e29b2321/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-808539",
	                "Source": "/var/lib/docker/volumes/no-preload-808539/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-808539",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-808539",
	                "name.minikube.sigs.k8s.io": "no-preload-808539",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f3e2556213f67e6d9c81693b34c26b4275a89bced754c144e2e188e674a7762d",
	            "SandboxKey": "/var/run/docker/netns/f3e2556213f6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-808539": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:46:2d:9f:e9:e4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "38dc5e7162482fea5b37cb1ee9d81ad023804ad94f7487798d7ddee0954e300e",
	                    "EndpointID": "b89b4e7e28f3cf4a26d89c17641f621c923b3db4b5292bc6f262520f54d6abba",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-808539",
	                        "ee665d228e59"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
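
Rather than scanning the full dump, single fields can be pulled from the same inspect data with a Go template, the mechanism the harness itself uses further down. For example, the host port mapped to the API server's 8443/tcp (the value matches the Ports block above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-808539   # -> 33061
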
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-808539 -n no-preload-808539
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-808539 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-808539 logs -n 25: (1.185246858s)
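
The post-mortem trims output with -n 25; the issue template in the error box above asks for a complete capture instead, which the same binary writes straight to a file:

	out/minikube-linux-amd64 -p no-preload-808539 logs --file=logs.txt
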
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p NoKubernetes-200573                                                                                                                                                                                                                        │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │ 16 Oct 25 18:25 UTC │
	│ ssh     │ force-systemd-flag-607466 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-607466 │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │ 16 Oct 25 18:25 UTC │
	│ start   │ -p NoKubernetes-200573 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │ 16 Oct 25 18:26 UTC │
	│ delete  │ -p force-systemd-flag-607466                                                                                                                                                                                                                  │ force-systemd-flag-607466 │ jenkins │ v1.37.0 │ 16 Oct 25 18:25 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p missing-upgrade-294813 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-294813    │ jenkins │ v1.32.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ ssh     │ -p NoKubernetes-200573 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │                     │
	│ stop    │ -p NoKubernetes-200573                                                                                                                                                                                                                        │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p NoKubernetes-200573 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ ssh     │ cert-options-817096 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-817096       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ ssh     │ -p cert-options-817096 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-817096       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ delete  │ -p cert-options-817096                                                                                                                                                                                                                        │ cert-options-817096       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ ssh     │ -p NoKubernetes-200573 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │                     │
	│ delete  │ -p NoKubernetes-200573                                                                                                                                                                                                                        │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-750025 │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p old-k8s-version-956814 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:27 UTC │
	│ start   │ -p missing-upgrade-294813 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-294813    │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:27 UTC │
	│ stop    │ -p kubernetes-upgrade-750025                                                                                                                                                                                                                  │ kubernetes-upgrade-750025 │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-750025 │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │                     │
	│ delete  │ -p missing-upgrade-294813                                                                                                                                                                                                                     │ missing-upgrade-294813    │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:27 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-956814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │                     │
	│ start   │ -p no-preload-808539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-808539         │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:28 UTC │
	│ stop    │ -p old-k8s-version-956814 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:27 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-956814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:27 UTC │
	│ start   │ -p old-k8s-version-956814 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-808539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-808539         │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:27:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:27:37.688948  238805 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:27:37.689249  238805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:27:37.689260  238805 out.go:374] Setting ErrFile to fd 2...
	I1016 18:27:37.689265  238805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:27:37.689458  238805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:27:37.690004  238805 out.go:368] Setting JSON to false
	I1016 18:27:37.691367  238805 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4206,"bootTime":1760635052,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:27:37.691464  238805 start.go:141] virtualization: kvm guest
	I1016 18:27:37.693609  238805 out.go:179] * [old-k8s-version-956814] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:27:37.695435  238805 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:27:37.695437  238805 notify.go:220] Checking for updates...
	I1016 18:27:37.697041  238805 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:27:37.698440  238805 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:27:37.700295  238805 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 18:27:37.701814  238805 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:27:37.703084  238805 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:27:37.705148  238805 config.go:182] Loaded profile config "old-k8s-version-956814": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1016 18:27:37.707164  238805 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1016 18:27:37.708474  238805 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:27:37.737921  238805 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 18:27:37.738079  238805 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:27:37.799367  238805 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-10-16 18:27:37.787963245 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:27:37.799469  238805 docker.go:318] overlay module found
	I1016 18:27:37.801235  238805 out.go:179] * Using the docker driver based on existing profile
	I1016 18:27:37.802767  238805 start.go:305] selected driver: docker
	I1016 18:27:37.802788  238805 start.go:925] validating driver "docker" against &{Name:old-k8s-version-956814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-956814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:27:37.802916  238805 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:27:37.803694  238805 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:27:37.860734  238805 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-10-16 18:27:37.850465079 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:27:37.861026  238805 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:27:37.861054  238805 cni.go:84] Creating CNI manager for ""
	I1016 18:27:37.861094  238805 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:27:37.861123  238805 start.go:349] cluster config:
	{Name:old-k8s-version-956814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-956814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:27:37.863129  238805 out.go:179] * Starting "old-k8s-version-956814" primary control-plane node in "old-k8s-version-956814" cluster
	I1016 18:27:37.864647  238805 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:27:37.866043  238805 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:27:37.867530  238805 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1016 18:27:37.867567  238805 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:27:37.867580  238805 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1016 18:27:37.867607  238805 cache.go:58] Caching tarball of preloaded images
	I1016 18:27:37.867702  238805 preload.go:233] Found /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1016 18:27:37.867732  238805 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1016 18:27:37.867840  238805 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/config.json ...
	I1016 18:27:37.889113  238805 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:27:37.889135  238805 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:27:37.889155  238805 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:27:37.889185  238805 start.go:360] acquireMachinesLock for old-k8s-version-956814: {Name:mk32193ea3659460348d6597a3d6352935ee1c27 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:27:37.889257  238805 start.go:364] duration metric: took 42.739µs to acquireMachinesLock for "old-k8s-version-956814"
	I1016 18:27:37.889278  238805 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:27:37.889287  238805 fix.go:54] fixHost starting: 
	I1016 18:27:37.889548  238805 cli_runner.go:164] Run: docker container inspect old-k8s-version-956814 --format={{.State.Status}}
	I1016 18:27:37.909479  238805 fix.go:112] recreateIfNeeded on old-k8s-version-956814: state=Stopped err=<nil>
	W1016 18:27:37.909506  238805 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:27:37.022220  234806 out.go:252]   - Generating certificates and keys ...
	I1016 18:27:37.022384  234806 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 18:27:37.022506  234806 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 18:27:37.286542  234806 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 18:27:37.594517  234806 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 18:27:37.850410  234806 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 18:27:37.908572  234806 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 18:27:38.244062  234806 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 18:27:38.244219  234806 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-808539] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1016 18:27:34.100016  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1016 18:27:34.100066  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:34.471583  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:59220->192.168.76.2:8443: read: connection reset by peer
	I1016 18:27:34.596213  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:34.596623  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:35.096432  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:35.096848  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:35.596558  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:35.597014  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:36.096752  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:36.097246  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:36.595916  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:36.596388  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:37.096054  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:37.096477  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:37.595811  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:37.596226  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:38.095786  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:38.096230  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:38.595859  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:38.596255  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:38.855562  234806 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 18:27:38.855738  234806 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-808539] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1016 18:27:39.221262  234806 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 18:27:39.365415  234806 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 18:27:39.773039  234806 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 18:27:39.773156  234806 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 18:27:40.030970  234806 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 18:27:40.086856  234806 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 18:27:40.132470  234806 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 18:27:40.667205  234806 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 18:27:41.523348  234806 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 18:27:41.524036  234806 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 18:27:41.528190  234806 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 18:27:37.911604  238805 out.go:252] * Restarting existing docker container for "old-k8s-version-956814" ...
	I1016 18:27:37.911662  238805 cli_runner.go:164] Run: docker start old-k8s-version-956814
	I1016 18:27:38.162294  238805 cli_runner.go:164] Run: docker container inspect old-k8s-version-956814 --format={{.State.Status}}
	I1016 18:27:38.184426  238805 kic.go:430] container "old-k8s-version-956814" state is running.
	I1016 18:27:38.184914  238805 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-956814
	I1016 18:27:38.206656  238805 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/config.json ...
	I1016 18:27:38.206914  238805 machine.go:93] provisionDockerMachine start ...
	I1016 18:27:38.206979  238805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-956814
	I1016 18:27:38.227886  238805 main.go:141] libmachine: Using SSH client type: native
	I1016 18:27:38.228168  238805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1016 18:27:38.228184  238805 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:27:38.228913  238805 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50968->127.0.0.1:33063: read: connection reset by peer
	I1016 18:27:41.367121  238805 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-956814
	
	I1016 18:27:41.367146  238805 ubuntu.go:182] provisioning hostname "old-k8s-version-956814"
	I1016 18:27:41.367199  238805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-956814
	I1016 18:27:41.386342  238805 main.go:141] libmachine: Using SSH client type: native
	I1016 18:27:41.386602  238805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1016 18:27:41.386618  238805 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-956814 && echo "old-k8s-version-956814" | sudo tee /etc/hostname
	I1016 18:27:41.537176  238805 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-956814
	
	I1016 18:27:41.537248  238805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-956814
	I1016 18:27:41.559787  238805 main.go:141] libmachine: Using SSH client type: native
	I1016 18:27:41.559997  238805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1016 18:27:41.560019  238805 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-956814' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-956814/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-956814' | sudo tee -a /etc/hosts; 
				fi
			fi
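
The shell fragment above keeps /etc/hosts idempotent: it rewrites or appends the 127.0.1.1 entry only when the hostname is not already present. A rough standalone Go sketch of the same logic (hostname and file path hard-coded for illustration; this is not minikube's actual implementation):

	// hosts_fixup.go - illustrative sketch of the idempotent /etc/hosts rewrite above.
	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	func main() {
		const host = "old-k8s-version-956814" // hard-coded for illustration
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		text := string(data)
		// Mirror of: grep -xq '.*\s<host>' /etc/hosts
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(host) + `$`).MatchString(text) {
			return // hostname already present, nothing to do
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(text) {
			// Mirror of: sed -i 's/^127.0.1.1\s.*/127.0.1.1 <host>/g'
			text = loopback.ReplaceAllString(text, "127.0.1.1 "+host)
		} else {
			// Mirror of: echo '127.0.1.1 <host>' | tee -a /etc/hosts
			text = strings.TrimRight(text, "\n") + "\n127.0.1.1 " + host + "\n"
		}
		fmt.Print(text) // a real implementation would write this back as root
	}
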
	I1016 18:27:41.702108  238805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:27:41.702138  238805 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8849/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8849/.minikube}
	I1016 18:27:41.702179  238805 ubuntu.go:190] setting up certificates
	I1016 18:27:41.702192  238805 provision.go:84] configureAuth start
	I1016 18:27:41.702248  238805 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-956814
	I1016 18:27:41.721202  238805 provision.go:143] copyHostCerts
	I1016 18:27:41.721267  238805 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem, removing ...
	I1016 18:27:41.721286  238805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem
	I1016 18:27:41.721419  238805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem (1123 bytes)
	I1016 18:27:41.721546  238805 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem, removing ...
	I1016 18:27:41.721558  238805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem
	I1016 18:27:41.721598  238805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem (1679 bytes)
	I1016 18:27:41.721694  238805 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem, removing ...
	I1016 18:27:41.721705  238805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem
	I1016 18:27:41.721772  238805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem (1078 bytes)
	I1016 18:27:41.721847  238805 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-956814 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-956814]
	I1016 18:27:41.964934  238805 provision.go:177] copyRemoteCerts
	I1016 18:27:41.964999  238805 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:27:41.965031  238805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-956814
	I1016 18:27:41.984237  238805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/old-k8s-version-956814/id_rsa Username:docker}
	I1016 18:27:42.085119  238805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 18:27:42.104224  238805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1016 18:27:42.123078  238805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:27:42.141794  238805 provision.go:87] duration metric: took 439.587449ms to configureAuth
	I1016 18:27:42.141830  238805 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:27:42.142045  238805 config.go:182] Loaded profile config "old-k8s-version-956814": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1016 18:27:42.142176  238805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-956814
	I1016 18:27:42.162109  238805 main.go:141] libmachine: Using SSH client type: native
	I1016 18:27:42.162371  238805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1016 18:27:42.162388  238805 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:27:42.461873  238805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:27:42.461903  238805 machine.go:96] duration metric: took 4.254972799s to provisionDockerMachine
	I1016 18:27:42.461918  238805 start.go:293] postStartSetup for "old-k8s-version-956814" (driver="docker")
	I1016 18:27:42.461934  238805 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:27:42.462001  238805 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:27:42.462047  238805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-956814
	I1016 18:27:42.482222  238805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/old-k8s-version-956814/id_rsa Username:docker}
	I1016 18:27:42.580550  238805 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:27:42.584140  238805 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:27:42.584173  238805 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:27:42.584185  238805 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/addons for local assets ...
	I1016 18:27:42.584245  238805 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/files for local assets ...
	I1016 18:27:42.584326  238805 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem -> 123752.pem in /etc/ssl/certs
	I1016 18:27:42.584413  238805 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:27:42.591989  238805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:27:42.609550  238805 start.go:296] duration metric: took 147.617748ms for postStartSetup
	I1016 18:27:42.609624  238805 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:27:42.609668  238805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-956814
	I1016 18:27:42.630879  238805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/old-k8s-version-956814/id_rsa Username:docker}
	I1016 18:27:41.530387  234806 out.go:252]   - Booting up control plane ...
	I1016 18:27:41.530518  234806 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 18:27:41.530615  234806 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 18:27:41.532416  234806 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 18:27:41.548615  234806 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 18:27:41.548801  234806 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 18:27:41.557573  234806 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 18:27:41.558064  234806 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 18:27:41.558127  234806 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 18:27:41.661091  234806 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 18:27:41.661228  234806 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 18:27:43.161954  234806 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.5009072s
	I1016 18:27:43.164762  234806 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 18:27:43.164915  234806 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1016 18:27:43.165055  234806 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 18:27:43.165190  234806 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
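
The three [control-plane-check] endpoints above are the components' standard local health ports: the apiserver's livez on 8443, the controller-manager's healthz on 10257, and the scheduler's livez on 10259. A minimal sketch of this kind of probe, assuming a self-signed serving certificate (hence InsecureSkipVerify), with the 4m0s budget and the roughly 500ms retry cadence visible elsewhere in this log; illustrative only, not the minikube code:

	// healthz_poll.go - poll a component's local health endpoint until it
	// returns 200 or the deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// control-plane components serve health endpoints with
				// self-signed certs, so skip verification for the probe
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget in the log
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://127.0.0.1:10257/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("kube-controller-manager is healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond) // ~500ms retry cadence
		}
		fmt.Println("timed out waiting for healthz")
	}
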
	I1016 18:27:42.727097  238805 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:27:42.731837  238805 fix.go:56] duration metric: took 4.842542408s for fixHost
	I1016 18:27:42.731861  238805 start.go:83] releasing machines lock for "old-k8s-version-956814", held for 4.842592707s
	I1016 18:27:42.731937  238805 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-956814
	I1016 18:27:42.750831  238805 ssh_runner.go:195] Run: cat /version.json
	I1016 18:27:42.750875  238805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-956814
	I1016 18:27:42.750992  238805 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:27:42.751071  238805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-956814
	I1016 18:27:42.770848  238805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/old-k8s-version-956814/id_rsa Username:docker}
	I1016 18:27:42.770971  238805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/old-k8s-version-956814/id_rsa Username:docker}
	I1016 18:27:42.865205  238805 ssh_runner.go:195] Run: systemctl --version
	I1016 18:27:42.920760  238805 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:27:42.955915  238805 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:27:42.960912  238805 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:27:42.960976  238805 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:27:42.969591  238805 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
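
The find/mv step above disables any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI that minikube installs (kindnet here) takes effect. A hypothetical Go equivalent of that rename pass:

	// cni_disable.go - rename bridge/podman CNI configs so CRI-O ignores them.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, err := filepath.Glob(pattern)
			if err != nil {
				panic(err)
			}
			for _, path := range matches {
				if strings.HasSuffix(path, ".mk_disabled") {
					continue // already disabled on a previous run
				}
				fmt.Printf("disabling %s\n", path)
				if err := os.Rename(path, path+".mk_disabled"); err != nil {
					panic(err)
				}
			}
		}
	}
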
	I1016 18:27:42.969615  238805 start.go:495] detecting cgroup driver to use...
	I1016 18:27:42.969645  238805 detect.go:190] detected "systemd" cgroup driver on host os
	I1016 18:27:42.969695  238805 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:27:42.986157  238805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:27:42.999232  238805 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:27:42.999316  238805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:27:43.014180  238805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:27:43.026892  238805 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:27:43.107809  238805 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:27:43.190123  238805 docker.go:234] disabling docker service ...
	I1016 18:27:43.190194  238805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:27:43.204798  238805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:27:43.218282  238805 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:27:43.308756  238805 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:27:43.414895  238805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:27:43.431548  238805 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:27:43.449945  238805 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1016 18:27:43.450007  238805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:27:43.460069  238805 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1016 18:27:43.460138  238805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:27:43.469448  238805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:27:43.478844  238805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:27:43.489313  238805 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:27:43.498207  238805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:27:43.507798  238805 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:27:43.517012  238805 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
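
The sed calls above edit CRI-O's drop-in config in place: pin the pause image, switch cgroup_manager to systemd to match the cgroup driver detected on the host, move conmon into the pod cgroup, and open unprivileged ports from 0 via default_sysctls. A rough Go equivalent of the first two edits (illustrative only; the real flow also reloads systemd and restarts crio afterwards, as the next log lines show):

	// crio_conf.go - rewrite pause_image and cgroup_manager in CRI-O's drop-in.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		text := string(data)
		text = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(text, `pause_image = "registry.k8s.io/pause:3.9"`)
		text = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(text, `cgroup_manager = "systemd"`)
		fmt.Print(text) // a real implementation would write the file back as root
	}
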
	I1016 18:27:43.527680  238805 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:27:43.535453  238805 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:27:43.543628  238805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:27:43.638123  238805 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 18:27:43.761665  238805 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:27:43.761755  238805 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:27:43.766362  238805 start.go:563] Will wait 60s for crictl version
	I1016 18:27:43.766431  238805 ssh_runner.go:195] Run: which crictl
	I1016 18:27:43.770252  238805 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:27:43.798459  238805 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:27:43.798543  238805 ssh_runner.go:195] Run: crio --version
	I1016 18:27:43.832166  238805 ssh_runner.go:195] Run: crio --version
	I1016 18:27:43.868032  238805 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1016 18:27:39.096114  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:39.096550  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:39.596206  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:39.596655  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:40.095876  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:40.096300  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:40.595811  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:40.596209  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:41.095816  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:41.096248  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:41.595858  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:41.596199  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:42.095825  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:42.096200  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:42.595863  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:42.596235  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:43.095833  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:43.096245  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:43.595793  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:43.596168  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:43.869560  238805 cli_runner.go:164] Run: docker network inspect old-k8s-version-956814 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:27:43.891523  238805 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1016 18:27:43.895971  238805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:27:43.907984  238805 kubeadm.go:883] updating cluster {Name:old-k8s-version-956814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-956814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:27:43.908110  238805 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1016 18:27:43.908191  238805 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:27:43.948969  238805 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:27:43.948994  238805 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:27:43.949049  238805 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:27:43.977204  238805 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:27:43.977238  238805 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:27:43.977248  238805 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.28.0 crio true true} ...
	I1016 18:27:43.977406  238805 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-956814 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-956814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
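
Note: the empty "ExecStart=" line in the kubelet drop-in above is deliberate systemd syntax, not log truncation; an override unit must first clear the ExecStart inherited from the base kubelet.service before it can supply its own command line.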
	I1016 18:27:43.977487  238805 ssh_runner.go:195] Run: crio config
	I1016 18:27:44.039682  238805 cni.go:84] Creating CNI manager for ""
	I1016 18:27:44.039703  238805 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:27:44.039730  238805 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:27:44.039759  238805 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-956814 NodeName:old-k8s-version-956814 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:27:44.039936  238805 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-956814"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
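
The generated kubeadm config above is a four-document YAML stream: InitConfiguration and ClusterConfiguration for kubeadm itself, plus a KubeletConfiguration and a KubeProxyConfiguration. A small sketch that enumerates the documents, assuming gopkg.in/yaml.v3 is available and a local copy of the file exists as kubeadm.yaml (both assumptions for illustration, not part of the test run):

	// kubeadm_docs.go - print apiVersion/kind of each document in the stream.
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f) // yaml.v3 decodes one document per Decode call
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break // end of the multi-document stream
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
		}
	}
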
	
	I1016 18:27:44.040006  238805 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1016 18:27:44.048835  238805 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:27:44.048922  238805 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:27:44.057982  238805 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1016 18:27:44.072320  238805 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:27:44.085958  238805 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1016 18:27:44.099507  238805 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1016 18:27:44.103548  238805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:27:44.114214  238805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:27:44.210423  238805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:27:44.233654  238805 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814 for IP: 192.168.103.2
	I1016 18:27:44.233673  238805 certs.go:195] generating shared ca certs ...
	I1016 18:27:44.233693  238805 certs.go:227] acquiring lock for ca certs: {Name:mkebf15a3970a66f77cf66e14f6efeaafcab5e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:27:44.233864  238805 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key
	I1016 18:27:44.233914  238805 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key
	I1016 18:27:44.233926  238805 certs.go:257] generating profile certs ...
	I1016 18:27:44.234021  238805 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/client.key
	I1016 18:27:44.234090  238805 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/apiserver.key.c3a94c9c
	I1016 18:27:44.234138  238805 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/proxy-client.key
	I1016 18:27:44.234305  238805 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem (1338 bytes)
	W1016 18:27:44.234343  238805 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375_empty.pem, impossibly tiny 0 bytes
	I1016 18:27:44.234355  238805 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem (1679 bytes)
	I1016 18:27:44.234388  238805 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem (1078 bytes)
	I1016 18:27:44.234419  238805 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:27:44.234449  238805 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem (1679 bytes)
	I1016 18:27:44.234510  238805 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:27:44.235322  238805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:27:44.257491  238805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1016 18:27:44.279201  238805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:27:44.299497  238805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1016 18:27:44.319939  238805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1016 18:27:44.344917  238805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 18:27:44.364448  238805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:27:44.382567  238805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1016 18:27:44.400973  238805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /usr/share/ca-certificates/123752.pem (1708 bytes)
	I1016 18:27:44.419889  238805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:27:44.439307  238805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem --> /usr/share/ca-certificates/12375.pem (1338 bytes)
	I1016 18:27:44.458556  238805 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:27:44.472361  238805 ssh_runner.go:195] Run: openssl version
	I1016 18:27:44.482287  238805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123752.pem && ln -fs /usr/share/ca-certificates/123752.pem /etc/ssl/certs/123752.pem"
	I1016 18:27:44.491459  238805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123752.pem
	I1016 18:27:44.495503  238805 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:49 /usr/share/ca-certificates/123752.pem
	I1016 18:27:44.495553  238805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123752.pem
	I1016 18:27:44.538554  238805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123752.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:27:44.548350  238805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:27:44.559223  238805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:27:44.563517  238805 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:27:44.563582  238805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:27:44.610338  238805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:27:44.619973  238805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12375.pem && ln -fs /usr/share/ca-certificates/12375.pem /etc/ssl/certs/12375.pem"
	I1016 18:27:44.630973  238805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12375.pem
	I1016 18:27:44.636160  238805 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:49 /usr/share/ca-certificates/12375.pem
	I1016 18:27:44.636218  238805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12375.pem
	I1016 18:27:44.677579  238805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12375.pem /etc/ssl/certs/51391683.0"
	I1016 18:27:44.686782  238805 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:27:44.691063  238805 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:27:44.735948  238805 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:27:44.786407  238805 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:27:44.842657  238805 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:27:44.899609  238805 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:27:44.954570  238805 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
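
A note on the two openssl idioms above: "openssl x509 -hash -noout" prints the certificate's subject hash, which is what names the /etc/ssl/certs/<hash>.0 symlinks being created (3ec20f2e.0, b5213941.0, 51391683.0) so that OpenSSL's CApath-style lookup can find each CA; and "-checkend 86400" exits non-zero if the certificate expires within the next 86400 seconds, so each control-plane certificate is verified to remain valid for at least one more day before the existing cluster is reused.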
	I1016 18:27:45.003624  238805 kubeadm.go:400] StartCluster: {Name:old-k8s-version-956814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-956814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:27:45.003700  238805 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:27:45.003768  238805 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:27:45.035481  238805 cri.go:89] found id: "e6e794e317e67fe62de737c5d5d21f76ffd898adc393e7b8d3b5127f203478a3"
	I1016 18:27:45.035507  238805 cri.go:89] found id: "e255d27c3903c0fe570376a329840373a1ad5b5caca41fc82de4b5a229ebafb0"
	I1016 18:27:45.035513  238805 cri.go:89] found id: "58a737ae76bdf77210a125a06ade45f191a00aba7f2561852cfb13f05b054511"
	I1016 18:27:45.035516  238805 cri.go:89] found id: "04c714a2b0c86cdc256763ea2928fc53c7c7d744cb6468b9458d572797f2c163"
	I1016 18:27:45.035520  238805 cri.go:89] found id: ""
	I1016 18:27:45.035564  238805 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 18:27:45.052560  238805 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:27:45Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:27:45.052653  238805 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:27:45.064249  238805 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 18:27:45.064270  238805 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 18:27:45.064329  238805 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 18:27:45.072748  238805 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:27:45.073676  238805 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-956814" does not appear in /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:27:45.074261  238805 kubeconfig.go:62] /home/jenkins/minikube-integration/21738-8849/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-956814" cluster setting kubeconfig missing "old-k8s-version-956814" context setting]
	I1016 18:27:45.074992  238805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:27:45.076502  238805 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 18:27:45.085687  238805 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1016 18:27:45.085730  238805 kubeadm.go:601] duration metric: took 21.453505ms to restartPrimaryControlPlane
	I1016 18:27:45.085742  238805 kubeadm.go:402] duration metric: took 82.125114ms to StartCluster
	I1016 18:27:45.085761  238805 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:27:45.085826  238805 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:27:45.086779  238805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:27:45.087007  238805 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:27:45.087232  238805 config.go:182] Loaded profile config "old-k8s-version-956814": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1016 18:27:45.087200  238805 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:27:45.087285  238805 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-956814"
	I1016 18:27:45.087294  238805 addons.go:69] Setting dashboard=true in profile "old-k8s-version-956814"
	I1016 18:27:45.087303  238805 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-956814"
	I1016 18:27:45.087304  238805 addons.go:238] Setting addon dashboard=true in "old-k8s-version-956814"
	W1016 18:27:45.087311  238805 addons.go:247] addon storage-provisioner should already be in state true
	W1016 18:27:45.087312  238805 addons.go:247] addon dashboard should already be in state true
	I1016 18:27:45.087335  238805 host.go:66] Checking if "old-k8s-version-956814" exists ...
	I1016 18:27:45.087335  238805 host.go:66] Checking if "old-k8s-version-956814" exists ...
	I1016 18:27:45.087366  238805 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-956814"
	I1016 18:27:45.087392  238805 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-956814"
	I1016 18:27:45.087781  238805 cli_runner.go:164] Run: docker container inspect old-k8s-version-956814 --format={{.State.Status}}
	I1016 18:27:45.087816  238805 cli_runner.go:164] Run: docker container inspect old-k8s-version-956814 --format={{.State.Status}}
	I1016 18:27:45.087972  238805 cli_runner.go:164] Run: docker container inspect old-k8s-version-956814 --format={{.State.Status}}
	I1016 18:27:45.090267  238805 out.go:179] * Verifying Kubernetes components...
	I1016 18:27:45.091594  238805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:27:45.115826  238805 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-956814"
	W1016 18:27:45.115908  238805 addons.go:247] addon default-storageclass should already be in state true
	I1016 18:27:45.115960  238805 host.go:66] Checking if "old-k8s-version-956814" exists ...
	I1016 18:27:45.116622  238805 cli_runner.go:164] Run: docker container inspect old-k8s-version-956814 --format={{.State.Status}}
	I1016 18:27:45.117624  238805 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1016 18:27:45.118817  238805 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:27:45.120418  238805 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1016 18:27:45.120452  238805 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:27:45.120470  238805 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:27:45.120537  238805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-956814
	I1016 18:27:45.121849  238805 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1016 18:27:45.121871  238805 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1016 18:27:45.121933  238805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-956814
	I1016 18:27:45.153947  238805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/old-k8s-version-956814/id_rsa Username:docker}
	I1016 18:27:45.154071  238805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/old-k8s-version-956814/id_rsa Username:docker}
	I1016 18:27:45.154550  238805 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:27:45.154564  238805 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:27:45.154615  238805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-956814
	I1016 18:27:45.191844  238805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/old-k8s-version-956814/id_rsa Username:docker}
	I1016 18:27:45.276540  238805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:27:45.287303  238805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:27:45.294165  238805 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-956814" to be "Ready" ...
	I1016 18:27:45.295283  238805 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1016 18:27:45.295342  238805 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1016 18:27:45.314408  238805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:27:45.314528  238805 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1016 18:27:45.314546  238805 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1016 18:27:45.331846  238805 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1016 18:27:45.331873  238805 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1016 18:27:45.369066  238805 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1016 18:27:45.369088  238805 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1016 18:27:45.389669  238805 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1016 18:27:45.389693  238805 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1016 18:27:45.407501  238805 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1016 18:27:45.407522  238805 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1016 18:27:45.421498  238805 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1016 18:27:45.421571  238805 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1016 18:27:45.435233  238805 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1016 18:27:45.435277  238805 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1016 18:27:45.451322  238805 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1016 18:27:45.451356  238805 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1016 18:27:45.465129  238805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1016 18:27:46.779989  238805 node_ready.go:49] node "old-k8s-version-956814" is "Ready"
	I1016 18:27:46.780025  238805 node_ready.go:38] duration metric: took 1.485829262s for node "old-k8s-version-956814" to be "Ready" ...
	I1016 18:27:46.780042  238805 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:27:46.780097  238805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:27:47.544456  238805 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.257080263s)
	I1016 18:27:47.544532  238805 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.23008893s)
	I1016 18:27:44.306873  234806 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.142029483s
	I1016 18:27:45.696117  234806 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.531301893s
	I1016 18:27:47.666858  234806 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501928043s
	I1016 18:27:47.683351  234806 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 18:27:47.698485  234806 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 18:27:47.711932  234806 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 18:27:47.712493  234806 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-808539 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 18:27:47.725562  234806 kubeadm.go:318] [bootstrap-token] Using token: 9h9tt4.6klnaj12fsc1v1n9
	I1016 18:27:47.947001  238805 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.481825836s)
	I1016 18:27:47.947087  238805 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.166969538s)
	I1016 18:27:47.947145  238805 api_server.go:72] duration metric: took 2.86010253s to wait for apiserver process to appear ...
	I1016 18:27:47.947157  238805 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:27:47.947176  238805 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1016 18:27:47.948489  238805 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-956814 addons enable metrics-server
	
	I1016 18:27:47.949794  238805 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1016 18:27:47.727964  234806 out.go:252]   - Configuring RBAC rules ...
	I1016 18:27:47.728167  234806 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 18:27:47.733039  234806 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 18:27:47.743848  234806 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 18:27:47.748556  234806 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 18:27:47.754114  234806 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 18:27:47.762872  234806 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 18:27:48.074157  234806 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 18:27:44.096076  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:44.096513  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:44.595803  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:44.596176  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:45.095790  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:45.097901  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:45.596591  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:45.597070  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:46.096436  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:46.096882  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:46.596618  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:46.597021  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:47.096756  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:47.097254  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:47.595797  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:47.596259  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:48.095859  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:48.096284  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:48.595853  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:48.596260  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:48.495184  234806 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 18:27:49.074669  234806 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 18:27:49.075806  234806 kubeadm.go:318] 
	I1016 18:27:49.075901  234806 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 18:27:49.075911  234806 kubeadm.go:318] 
	I1016 18:27:49.076009  234806 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 18:27:49.076017  234806 kubeadm.go:318] 
	I1016 18:27:49.076055  234806 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 18:27:49.076132  234806 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 18:27:49.076206  234806 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 18:27:49.076237  234806 kubeadm.go:318] 
	I1016 18:27:49.076321  234806 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 18:27:49.076333  234806 kubeadm.go:318] 
	I1016 18:27:49.076430  234806 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 18:27:49.076449  234806 kubeadm.go:318] 
	I1016 18:27:49.076518  234806 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 18:27:49.076640  234806 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 18:27:49.076787  234806 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 18:27:49.076803  234806 kubeadm.go:318] 
	I1016 18:27:49.076909  234806 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 18:27:49.077060  234806 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 18:27:49.077087  234806 kubeadm.go:318] 
	I1016 18:27:49.077210  234806 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 9h9tt4.6klnaj12fsc1v1n9 \
	I1016 18:27:49.077357  234806 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c \
	I1016 18:27:49.077398  234806 kubeadm.go:318] 	--control-plane 
	I1016 18:27:49.077419  234806 kubeadm.go:318] 
	I1016 18:27:49.077528  234806 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 18:27:49.077537  234806 kubeadm.go:318] 
	I1016 18:27:49.077657  234806 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 9h9tt4.6klnaj12fsc1v1n9 \
	I1016 18:27:49.077814  234806 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c 
	I1016 18:27:49.079737  234806 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1016 18:27:49.079894  234806 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1016 18:27:49.079926  234806 cni.go:84] Creating CNI manager for ""
	I1016 18:27:49.079939  234806 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:27:49.082440  234806 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 18:27:47.950923  238805 addons.go:514] duration metric: took 2.863727812s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1016 18:27:47.951379  238805 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1016 18:27:47.951399  238805 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1016 18:27:48.447882  238805 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1016 18:27:48.452530  238805 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1016 18:27:48.453852  238805 api_server.go:141] control plane version: v1.28.0
	I1016 18:27:48.453889  238805 api_server.go:131] duration metric: took 506.723589ms to wait for apiserver health ...
	I1016 18:27:48.453900  238805 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:27:48.458475  238805 system_pods.go:59] 8 kube-system pods found
	I1016 18:27:48.458513  238805 system_pods.go:61] "coredns-5dd5756b68-kdcm7" [843a7578-3aeb-49b4-afcf-aa7d0c26f7f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:27:48.458521  238805 system_pods.go:61] "etcd-old-k8s-version-956814" [df912ec6-1f46-496f-8651-3d9e192ac464] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 18:27:48.458526  238805 system_pods.go:61] "kindnet-94l8q" [f914e471-760c-4cc6-ad8e-b3c0372d9f38] Running
	I1016 18:27:48.458532  238805 system_pods.go:61] "kube-apiserver-old-k8s-version-956814" [996ba7eb-ac5d-4500-9246-998ab92fdde9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:27:48.458545  238805 system_pods.go:61] "kube-controller-manager-old-k8s-version-956814" [2acfa7fd-18c6-49eb-ab39-997dae3776da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:27:48.458550  238805 system_pods.go:61] "kube-proxy-nkwcm" [42a87fa5-c9a9-4549-82ae-7026313269a8] Running
	I1016 18:27:48.458556  238805 system_pods.go:61] "kube-scheduler-old-k8s-version-956814" [68968c7d-d4ba-4b40-a014-805d7d5acdbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:27:48.458559  238805 system_pods.go:61] "storage-provisioner" [58886065-9960-40b4-964e-f767d2460754] Running
	I1016 18:27:48.458565  238805 system_pods.go:74] duration metric: took 4.659422ms to wait for pod list to return data ...
	I1016 18:27:48.458575  238805 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:27:48.460984  238805 default_sa.go:45] found service account: "default"
	I1016 18:27:48.461012  238805 default_sa.go:55] duration metric: took 2.430048ms for default service account to be created ...
	I1016 18:27:48.461024  238805 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 18:27:48.464863  238805 system_pods.go:86] 8 kube-system pods found
	I1016 18:27:48.464897  238805 system_pods.go:89] "coredns-5dd5756b68-kdcm7" [843a7578-3aeb-49b4-afcf-aa7d0c26f7f2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:27:48.464908  238805 system_pods.go:89] "etcd-old-k8s-version-956814" [df912ec6-1f46-496f-8651-3d9e192ac464] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 18:27:48.464917  238805 system_pods.go:89] "kindnet-94l8q" [f914e471-760c-4cc6-ad8e-b3c0372d9f38] Running
	I1016 18:27:48.464927  238805 system_pods.go:89] "kube-apiserver-old-k8s-version-956814" [996ba7eb-ac5d-4500-9246-998ab92fdde9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:27:48.464939  238805 system_pods.go:89] "kube-controller-manager-old-k8s-version-956814" [2acfa7fd-18c6-49eb-ab39-997dae3776da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:27:48.464946  238805 system_pods.go:89] "kube-proxy-nkwcm" [42a87fa5-c9a9-4549-82ae-7026313269a8] Running
	I1016 18:27:48.464954  238805 system_pods.go:89] "kube-scheduler-old-k8s-version-956814" [68968c7d-d4ba-4b40-a014-805d7d5acdbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:27:48.464963  238805 system_pods.go:89] "storage-provisioner" [58886065-9960-40b4-964e-f767d2460754] Running
	I1016 18:27:48.464970  238805 system_pods.go:126] duration metric: took 3.940741ms to wait for k8s-apps to be running ...
	I1016 18:27:48.464982  238805 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 18:27:48.465057  238805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:27:48.479428  238805 system_svc.go:56] duration metric: took 14.438996ms WaitForService to wait for kubelet
	I1016 18:27:48.479457  238805 kubeadm.go:586] duration metric: took 3.392412622s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:27:48.479480  238805 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:27:48.483909  238805 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1016 18:27:48.483943  238805 node_conditions.go:123] node cpu capacity is 8
	I1016 18:27:48.483959  238805 node_conditions.go:105] duration metric: took 4.471801ms to run NodePressure ...
	I1016 18:27:48.483973  238805 start.go:241] waiting for startup goroutines ...
	I1016 18:27:48.483982  238805 start.go:246] waiting for cluster config update ...
	I1016 18:27:48.483995  238805 start.go:255] writing updated cluster config ...
	I1016 18:27:48.484311  238805 ssh_runner.go:195] Run: rm -f paused
	I1016 18:27:48.489164  238805 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:27:48.493854  238805 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-kdcm7" in "kube-system" namespace to be "Ready" or be gone ...
	W1016 18:27:50.499527  238805 pod_ready.go:104] pod "coredns-5dd5756b68-kdcm7" is not "Ready", error: <nil>
	W1016 18:27:52.499906  238805 pod_ready.go:104] pod "coredns-5dd5756b68-kdcm7" is not "Ready", error: <nil>
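The healthz probes in the logs above go through three phases: connection refused while the apiserver is still starting, a 500 while the poststarthook/rbac/bootstrap-roles hook is pending, and finally a 200 "ok". A minimal Go sketch of that poll-until-healthy loop follows — this is an illustration of the pattern, not minikube's actual api_server.go code; the URL and timeout are assumptions taken loosely from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
// or the deadline expires. TLS verification is skipped because a
// bootstrap-time probe targets the cluster's self-signed certificate.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// e.g. 500 while poststarthook/rbac/bootstrap-roles is pending
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.103.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}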
	I1016 18:27:49.083708  234806 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 18:27:49.088910  234806 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 18:27:49.088930  234806 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 18:27:49.102973  234806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 18:27:49.318437  234806 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 18:27:49.318512  234806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:27:49.318545  234806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-808539 minikube.k8s.io/updated_at=2025_10_16T18_27_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=no-preload-808539 minikube.k8s.io/primary=true
	I1016 18:27:49.406899  234806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:27:49.406982  234806 ops.go:34] apiserver oom_adj: -16
	I1016 18:27:49.907924  234806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:27:50.407757  234806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:27:50.907989  234806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:27:51.407936  234806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:27:51.907343  234806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:27:52.407062  234806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:27:52.907541  234806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:27:53.407460  234806 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:27:53.483353  234806 kubeadm.go:1113] duration metric: took 4.164899972s to wait for elevateKubeSystemPrivileges
	I1016 18:27:53.483384  234806 kubeadm.go:402] duration metric: took 16.871421628s to StartCluster
	I1016 18:27:53.483400  234806 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:27:53.483478  234806 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:27:53.484821  234806 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:27:53.485088  234806 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:27:53.485108  234806 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 18:27:53.485135  234806 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:27:53.485251  234806 addons.go:69] Setting storage-provisioner=true in profile "no-preload-808539"
	I1016 18:27:53.485278  234806 addons.go:238] Setting addon storage-provisioner=true in "no-preload-808539"
	I1016 18:27:53.485311  234806 config.go:182] Loaded profile config "no-preload-808539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:27:53.485324  234806 addons.go:69] Setting default-storageclass=true in profile "no-preload-808539"
	I1016 18:27:53.485336  234806 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-808539"
	I1016 18:27:53.485315  234806 host.go:66] Checking if "no-preload-808539" exists ...
	I1016 18:27:53.485702  234806 cli_runner.go:164] Run: docker container inspect no-preload-808539 --format={{.State.Status}}
	I1016 18:27:53.485848  234806 cli_runner.go:164] Run: docker container inspect no-preload-808539 --format={{.State.Status}}
	I1016 18:27:53.486503  234806 out.go:179] * Verifying Kubernetes components...
	I1016 18:27:53.487915  234806 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:27:53.509948  234806 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:27:53.510339  234806 addons.go:238] Setting addon default-storageclass=true in "no-preload-808539"
	I1016 18:27:53.510374  234806 host.go:66] Checking if "no-preload-808539" exists ...
	I1016 18:27:53.510699  234806 cli_runner.go:164] Run: docker container inspect no-preload-808539 --format={{.State.Status}}
	I1016 18:27:53.511375  234806 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:27:53.511397  234806 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:27:53.511447  234806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-808539
	I1016 18:27:53.537658  234806 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:27:53.537676  234806 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:27:53.537741  234806 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-808539
	I1016 18:27:53.543209  234806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/no-preload-808539/id_rsa Username:docker}
	I1016 18:27:53.561556  234806 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/no-preload-808539/id_rsa Username:docker}
	I1016 18:27:53.587365  234806 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 18:27:53.627363  234806 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:27:53.658563  234806 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:27:53.682942  234806 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:27:53.775689  234806 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1016 18:27:53.777017  234806 node_ready.go:35] waiting up to 6m0s for node "no-preload-808539" to be "Ready" ...
	I1016 18:27:54.007783  234806 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
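The sed pipeline at 18:27:53.587365 rewrites the coredns ConfigMap in place so that host.minikube.internal resolves to the host gateway. Reconstructed from that sed script itself (the rest of the Corefile is omitted), the fragment it injects ahead of the forward directive is:

        hosts {
           192.168.94.1 host.minikube.internal
           fallthrough
        }

The same pipeline also inserts a log directive before errors, which is why the "host record injected into CoreDNS's ConfigMap" line appears once the replace completes.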
	I1016 18:27:49.096013  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:49.096465  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:49.596096  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:49.596486  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:50.095818  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:50.096195  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:50.595818  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:50.596228  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:51.095879  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:51.096313  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:51.595807  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:27:51.596212  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:27:52.095877  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:27:52.095957  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:27:52.124978  228782 cri.go:89] found id: "d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20"
	I1016 18:27:52.125027  228782 cri.go:89] found id: ""
	I1016 18:27:52.125042  228782 logs.go:282] 1 containers: [d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20]
	I1016 18:27:52.125089  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:27:52.129114  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:27:52.129167  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:27:52.156534  228782 cri.go:89] found id: ""
	I1016 18:27:52.156561  228782 logs.go:282] 0 containers: []
	W1016 18:27:52.156568  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:27:52.156574  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:27:52.156619  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:27:52.184876  228782 cri.go:89] found id: ""
	I1016 18:27:52.184901  228782 logs.go:282] 0 containers: []
	W1016 18:27:52.184911  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:27:52.184918  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:27:52.184999  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:27:52.213753  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:27:52.213778  228782 cri.go:89] found id: ""
	I1016 18:27:52.213785  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:27:52.213837  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:27:52.218159  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:27:52.218227  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:27:52.246986  228782 cri.go:89] found id: ""
	I1016 18:27:52.247007  228782 logs.go:282] 0 containers: []
	W1016 18:27:52.247015  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:27:52.247022  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:27:52.247094  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:27:52.276010  228782 cri.go:89] found id: "13e204bd5108593613aac4101add245b76cbecc0ad3e096b9477f6c9fddc01ae"
	I1016 18:27:52.276032  228782 cri.go:89] found id: ""
	I1016 18:27:52.276038  228782 logs.go:282] 1 containers: [13e204bd5108593613aac4101add245b76cbecc0ad3e096b9477f6c9fddc01ae]
	I1016 18:27:52.276083  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:27:52.280210  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:27:52.280282  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:27:52.307400  228782 cri.go:89] found id: ""
	I1016 18:27:52.307427  228782 logs.go:282] 0 containers: []
	W1016 18:27:52.307435  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:27:52.307440  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:27:52.307488  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:27:52.336291  228782 cri.go:89] found id: ""
	I1016 18:27:52.336316  228782 logs.go:282] 0 containers: []
	W1016 18:27:52.336327  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:27:52.336338  228782 logs.go:123] Gathering logs for kube-controller-manager [13e204bd5108593613aac4101add245b76cbecc0ad3e096b9477f6c9fddc01ae] ...
	I1016 18:27:52.336352  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13e204bd5108593613aac4101add245b76cbecc0ad3e096b9477f6c9fddc01ae"
	I1016 18:27:52.366447  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:27:52.366515  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:27:52.409252  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:27:52.409285  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:27:52.443689  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:27:52.443749  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:27:52.514291  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:27:52.514325  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:27:52.529650  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:27:52.529677  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:27:52.601289  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:27:52.601322  228782 logs.go:123] Gathering logs for kube-apiserver [d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20] ...
	I1016 18:27:52.601342  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20"
	I1016 18:27:52.635530  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:27:52.635562  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
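The log-gathering block above follows a two-step flow: ask crictl for container IDs matching a component name, then tail each container's logs. A stripped-down Go sketch of that flow, using exactly the crictl invocations visible in the commands above (error handling reduced; this approximates logs.go rather than reproducing it):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all CRI containers whose name matches the
// given filter, i.e. "sudo crictl ps -a --quiet --name=<name>".
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, id := range ids {
		// Tail the last 400 lines, mirroring "crictl logs --tail 400 <id>".
		logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("=== %s ===\n%s\n", id, logs)
	}
}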
	W1016 18:27:54.499984  238805 pod_ready.go:104] pod "coredns-5dd5756b68-kdcm7" is not "Ready", error: <nil>
	W1016 18:27:57.000889  238805 pod_ready.go:104] pod "coredns-5dd5756b68-kdcm7" is not "Ready", error: <nil>
	I1016 18:27:54.008990  234806 addons.go:514] duration metric: took 523.864072ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1016 18:27:54.280052  234806 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-808539" context rescaled to 1 replicas
	W1016 18:27:55.780128  234806 node_ready.go:57] node "no-preload-808539" has "Ready":"False" status (will retry)
	W1016 18:27:57.780859  234806 node_ready.go:57] node "no-preload-808539" has "Ready":"False" status (will retry)
	I1016 18:27:55.181768  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1016 18:27:59.499470  238805 pod_ready.go:104] pod "coredns-5dd5756b68-kdcm7" is not "Ready", error: <nil>
	W1016 18:28:01.500112  238805 pod_ready.go:104] pod "coredns-5dd5756b68-kdcm7" is not "Ready", error: <nil>
	W1016 18:27:59.780911  234806 node_ready.go:57] node "no-preload-808539" has "Ready":"False" status (will retry)
	W1016 18:28:02.280456  234806 node_ready.go:57] node "no-preload-808539" has "Ready":"False" status (will retry)
	I1016 18:28:00.182050  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1016 18:28:00.182114  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:28:00.182173  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:28:00.220674  228782 cri.go:89] found id: "cc097aa6deaae2796a29ce872ed20155a22253a5bc0ad1b581600d8522e28674"
	I1016 18:28:00.220697  228782 cri.go:89] found id: "d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20"
	I1016 18:28:00.220703  228782 cri.go:89] found id: ""
	I1016 18:28:00.220816  228782 logs.go:282] 2 containers: [cc097aa6deaae2796a29ce872ed20155a22253a5bc0ad1b581600d8522e28674 d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20]
	I1016 18:28:00.220894  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:28:00.226093  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:28:00.231231  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:28:00.231298  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:28:00.278093  228782 cri.go:89] found id: ""
	I1016 18:28:00.278125  228782 logs.go:282] 0 containers: []
	W1016 18:28:00.278144  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:28:00.278152  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:28:00.278211  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:28:00.317150  228782 cri.go:89] found id: ""
	I1016 18:28:00.317239  228782 logs.go:282] 0 containers: []
	W1016 18:28:00.317252  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:28:00.317260  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:28:00.317326  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:28:00.351321  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:28:00.351346  228782 cri.go:89] found id: ""
	I1016 18:28:00.351357  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:28:00.351414  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:28:00.357111  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:28:00.357185  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:28:00.393595  228782 cri.go:89] found id: ""
	I1016 18:28:00.393625  228782 logs.go:282] 0 containers: []
	W1016 18:28:00.393635  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:28:00.393642  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:28:00.393736  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:28:00.428911  228782 cri.go:89] found id: "d4b57a3502815be7d56599ccff60e46c347c28c3c24364506c3b936e45cf1929"
	I1016 18:28:00.428934  228782 cri.go:89] found id: "13e204bd5108593613aac4101add245b76cbecc0ad3e096b9477f6c9fddc01ae"
	I1016 18:28:00.428940  228782 cri.go:89] found id: ""
	I1016 18:28:00.428948  228782 logs.go:282] 2 containers: [d4b57a3502815be7d56599ccff60e46c347c28c3c24364506c3b936e45cf1929 13e204bd5108593613aac4101add245b76cbecc0ad3e096b9477f6c9fddc01ae]
	I1016 18:28:00.429050  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:28:00.434264  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:28:00.439389  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:28:00.439462  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:28:00.474810  228782 cri.go:89] found id: ""
	I1016 18:28:00.474838  228782 logs.go:282] 0 containers: []
	W1016 18:28:00.474848  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:28:00.474855  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:28:00.474912  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:28:00.514326  228782 cri.go:89] found id: ""
	I1016 18:28:00.514369  228782 logs.go:282] 0 containers: []
	W1016 18:28:00.514378  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:28:00.514392  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:28:00.514406  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:28:00.533978  228782 logs.go:123] Gathering logs for kube-apiserver [cc097aa6deaae2796a29ce872ed20155a22253a5bc0ad1b581600d8522e28674] ...
	I1016 18:28:00.534011  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc097aa6deaae2796a29ce872ed20155a22253a5bc0ad1b581600d8522e28674"
	I1016 18:28:00.577339  228782 logs.go:123] Gathering logs for kube-apiserver [d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20] ...
	I1016 18:28:00.577373  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20"
	I1016 18:28:00.623909  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:28:00.623944  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:28:00.684465  228782 logs.go:123] Gathering logs for kube-controller-manager [13e204bd5108593613aac4101add245b76cbecc0ad3e096b9477f6c9fddc01ae] ...
	I1016 18:28:00.684500  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13e204bd5108593613aac4101add245b76cbecc0ad3e096b9477f6c9fddc01ae"
	I1016 18:28:00.716078  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:28:00.716104  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:28:00.752458  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:28:00.752500  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:28:04.002789  238805 pod_ready.go:104] pod "coredns-5dd5756b68-kdcm7" is not "Ready", error: <nil>
	W1016 18:28:06.500024  238805 pod_ready.go:104] pod "coredns-5dd5756b68-kdcm7" is not "Ready", error: <nil>
	W1016 18:28:04.781232  234806 node_ready.go:57] node "no-preload-808539" has "Ready":"False" status (will retry)
	W1016 18:28:07.280611  234806 node_ready.go:57] node "no-preload-808539" has "Ready":"False" status (will retry)
	I1016 18:28:08.280086  234806 node_ready.go:49] node "no-preload-808539" is "Ready"
	I1016 18:28:08.280117  234806 node_ready.go:38] duration metric: took 14.503070382s for node "no-preload-808539" to be "Ready" ...
	I1016 18:28:08.280136  234806 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:28:08.280179  234806 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:28:08.292525  234806 api_server.go:72] duration metric: took 14.807404912s to wait for apiserver process to appear ...
	I1016 18:28:08.292549  234806 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:28:08.292567  234806 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:28:08.298047  234806 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1016 18:28:08.299252  234806 api_server.go:141] control plane version: v1.34.1
	I1016 18:28:08.299276  234806 api_server.go:131] duration metric: took 6.720966ms to wait for apiserver health ...
	I1016 18:28:08.299287  234806 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:28:08.302412  234806 system_pods.go:59] 8 kube-system pods found
	I1016 18:28:08.302443  234806 system_pods.go:61] "coredns-66bc5c9577-ntqqg" [3d28093d-4751-4cac-a926-0ec629262ca6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:28:08.302453  234806 system_pods.go:61] "etcd-no-preload-808539" [b2f36555-3443-41a8-bf64-af5b3c83b89d] Running
	I1016 18:28:08.302461  234806 system_pods.go:61] "kindnet-kxznd" [8f4ae10c-9947-42dd-b9d6-a7fe6b4a4464] Running
	I1016 18:28:08.302467  234806 system_pods.go:61] "kube-apiserver-no-preload-808539" [29912a75-56f2-4367-a8b1-79988c823988] Running
	I1016 18:28:08.302474  234806 system_pods.go:61] "kube-controller-manager-no-preload-808539" [2618eabd-27fc-4084-928f-a6c2214d7abe] Running
	I1016 18:28:08.302480  234806 system_pods.go:61] "kube-proxy-68kl9" [99922fc0-a3ab-4328-95e9-9f2dea6573c9] Running
	I1016 18:28:08.302488  234806 system_pods.go:61] "kube-scheduler-no-preload-808539" [8449927b-d55d-4dc1-ba67-dc1968bfbf84] Running
	I1016 18:28:08.302497  234806 system_pods.go:61] "storage-provisioner" [633408d0-ddc3-43e6-8f33-9fa9f394758d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:28:08.302510  234806 system_pods.go:74] duration metric: took 3.215875ms to wait for pod list to return data ...
	I1016 18:28:08.302522  234806 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:28:08.305483  234806 default_sa.go:45] found service account: "default"
	I1016 18:28:08.305502  234806 default_sa.go:55] duration metric: took 2.973851ms for default service account to be created ...
	I1016 18:28:08.305511  234806 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 18:28:08.308605  234806 system_pods.go:86] 8 kube-system pods found
	I1016 18:28:08.308636  234806 system_pods.go:89] "coredns-66bc5c9577-ntqqg" [3d28093d-4751-4cac-a926-0ec629262ca6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:28:08.308644  234806 system_pods.go:89] "etcd-no-preload-808539" [b2f36555-3443-41a8-bf64-af5b3c83b89d] Running
	I1016 18:28:08.308652  234806 system_pods.go:89] "kindnet-kxznd" [8f4ae10c-9947-42dd-b9d6-a7fe6b4a4464] Running
	I1016 18:28:08.308658  234806 system_pods.go:89] "kube-apiserver-no-preload-808539" [29912a75-56f2-4367-a8b1-79988c823988] Running
	I1016 18:28:08.308664  234806 system_pods.go:89] "kube-controller-manager-no-preload-808539" [2618eabd-27fc-4084-928f-a6c2214d7abe] Running
	I1016 18:28:08.308669  234806 system_pods.go:89] "kube-proxy-68kl9" [99922fc0-a3ab-4328-95e9-9f2dea6573c9] Running
	I1016 18:28:08.308675  234806 system_pods.go:89] "kube-scheduler-no-preload-808539" [8449927b-d55d-4dc1-ba67-dc1968bfbf84] Running
	I1016 18:28:08.308685  234806 system_pods.go:89] "storage-provisioner" [633408d0-ddc3-43e6-8f33-9fa9f394758d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:28:08.308707  234806 retry.go:31] will retry after 203.871062ms: missing components: kube-dns
	I1016 18:28:08.517135  234806 system_pods.go:86] 8 kube-system pods found
	I1016 18:28:08.517167  234806 system_pods.go:89] "coredns-66bc5c9577-ntqqg" [3d28093d-4751-4cac-a926-0ec629262ca6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:28:08.517174  234806 system_pods.go:89] "etcd-no-preload-808539" [b2f36555-3443-41a8-bf64-af5b3c83b89d] Running
	I1016 18:28:08.517180  234806 system_pods.go:89] "kindnet-kxznd" [8f4ae10c-9947-42dd-b9d6-a7fe6b4a4464] Running
	I1016 18:28:08.517183  234806 system_pods.go:89] "kube-apiserver-no-preload-808539" [29912a75-56f2-4367-a8b1-79988c823988] Running
	I1016 18:28:08.517188  234806 system_pods.go:89] "kube-controller-manager-no-preload-808539" [2618eabd-27fc-4084-928f-a6c2214d7abe] Running
	I1016 18:28:08.517191  234806 system_pods.go:89] "kube-proxy-68kl9" [99922fc0-a3ab-4328-95e9-9f2dea6573c9] Running
	I1016 18:28:08.517194  234806 system_pods.go:89] "kube-scheduler-no-preload-808539" [8449927b-d55d-4dc1-ba67-dc1968bfbf84] Running
	I1016 18:28:08.517197  234806 system_pods.go:89] "storage-provisioner" [633408d0-ddc3-43e6-8f33-9fa9f394758d] Running
	I1016 18:28:08.517204  234806 system_pods.go:126] duration metric: took 211.687915ms to wait for k8s-apps to be running ...
	I1016 18:28:08.517211  234806 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 18:28:08.517253  234806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:28:08.531113  234806 system_svc.go:56] duration metric: took 13.890071ms WaitForService to wait for kubelet
	I1016 18:28:08.531141  234806 kubeadm.go:586] duration metric: took 15.046025938s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:28:08.531161  234806 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:28:08.534281  234806 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1016 18:28:08.534305  234806 node_conditions.go:123] node cpu capacity is 8
	I1016 18:28:08.534317  234806 node_conditions.go:105] duration metric: took 3.150254ms to run NodePressure ...
	I1016 18:28:08.534331  234806 start.go:241] waiting for startup goroutines ...
	I1016 18:28:08.534338  234806 start.go:246] waiting for cluster config update ...
	I1016 18:28:08.534347  234806 start.go:255] writing updated cluster config ...
	I1016 18:28:08.534581  234806 ssh_runner.go:195] Run: rm -f paused
	I1016 18:28:08.538770  234806 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:28:08.542445  234806 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ntqqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:28:09.548004  234806 pod_ready.go:94] pod "coredns-66bc5c9577-ntqqg" is "Ready"
	I1016 18:28:09.548052  234806 pod_ready.go:86] duration metric: took 1.005583834s for pod "coredns-66bc5c9577-ntqqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:28:09.550658  234806 pod_ready.go:83] waiting for pod "etcd-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:28:09.554682  234806 pod_ready.go:94] pod "etcd-no-preload-808539" is "Ready"
	I1016 18:28:09.554704  234806 pod_ready.go:86] duration metric: took 4.02706ms for pod "etcd-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:28:09.556668  234806 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:28:09.560466  234806 pod_ready.go:94] pod "kube-apiserver-no-preload-808539" is "Ready"
	I1016 18:28:09.560484  234806 pod_ready.go:86] duration metric: took 3.795265ms for pod "kube-apiserver-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:28:09.562235  234806 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:28:09.746482  234806 pod_ready.go:94] pod "kube-controller-manager-no-preload-808539" is "Ready"
	I1016 18:28:09.746515  234806 pod_ready.go:86] duration metric: took 184.261643ms for pod "kube-controller-manager-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:28:09.946746  234806 pod_ready.go:83] waiting for pod "kube-proxy-68kl9" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:28:10.346254  234806 pod_ready.go:94] pod "kube-proxy-68kl9" is "Ready"
	I1016 18:28:10.346281  234806 pod_ready.go:86] duration metric: took 399.511495ms for pod "kube-proxy-68kl9" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:28:10.547122  234806 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:28:10.946211  234806 pod_ready.go:94] pod "kube-scheduler-no-preload-808539" is "Ready"
	I1016 18:28:10.946243  234806 pod_ready.go:86] duration metric: took 399.095434ms for pod "kube-scheduler-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:28:10.946259  234806 pod_ready.go:40] duration metric: took 2.407458302s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:28:10.992089  234806 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1016 18:28:10.995422  234806 out.go:179] * Done! kubectl is now configured to use "no-preload-808539" cluster and "default" namespace by default
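The pod_ready.go waits that bracket this block poll kube-system pods by label until their Ready condition is True. A simplified client-go sketch of that kind of wait — an approximation under stated assumptions, not minikube's implementation; the kubeconfig path is a placeholder and only one of the label selectors from the log is shown:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute) // matches the 4m0s extra wait
	defer cancel()
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kube-dns", // one of the labels waited on above
		})
		if err == nil && len(pods.Items) > 0 {
			allReady := true
			for i := range pods.Items {
				if !isPodReady(&pods.Items[i]) {
					allReady = false
				}
			}
			if allReady {
				fmt.Println("all matching pods are Ready")
				return
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pods to be Ready")
			return
		case <-time.After(2 * time.Second): // retry interval comparable to the log's ~2s checks
		}
	}
}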
	W1016 18:28:08.999319  238805 pod_ready.go:104] pod "coredns-5dd5756b68-kdcm7" is not "Ready", error: <nil>
	W1016 18:28:11.000344  238805 pod_ready.go:104] pod "coredns-5dd5756b68-kdcm7" is not "Ready", error: <nil>
	I1016 18:28:10.822004  228782 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.069479284s)
	W1016 18:28:10.822057  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1016 18:28:10.822067  228782 logs.go:123] Gathering logs for kube-controller-manager [d4b57a3502815be7d56599ccff60e46c347c28c3c24364506c3b936e45cf1929] ...
	I1016 18:28:10.822082  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4b57a3502815be7d56599ccff60e46c347c28c3c24364506c3b936e45cf1929"
	I1016 18:28:10.849866  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:28:10.849892  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:28:10.894233  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:28:10.894267  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:28:13.468740  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1016 18:28:13.499890  238805 pod_ready.go:104] pod "coredns-5dd5756b68-kdcm7" is not "Ready", error: <nil>
	W1016 18:28:16.000148  238805 pod_ready.go:104] pod "coredns-5dd5756b68-kdcm7" is not "Ready", error: <nil>
	I1016 18:28:14.454089  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:51044->192.168.76.2:8443: read: connection reset by peer
	I1016 18:28:14.454158  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:28:14.454223  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:28:14.484212  228782 cri.go:89] found id: "cc097aa6deaae2796a29ce872ed20155a22253a5bc0ad1b581600d8522e28674"
	I1016 18:28:14.484236  228782 cri.go:89] found id: "d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20"
	I1016 18:28:14.484241  228782 cri.go:89] found id: ""
	I1016 18:28:14.484250  228782 logs.go:282] 2 containers: [cc097aa6deaae2796a29ce872ed20155a22253a5bc0ad1b581600d8522e28674 d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20]
	I1016 18:28:14.484303  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:28:14.488529  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:28:14.492794  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:28:14.492859  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:28:14.521658  228782 cri.go:89] found id: ""
	I1016 18:28:14.521681  228782 logs.go:282] 0 containers: []
	W1016 18:28:14.521689  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:28:14.521694  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:28:14.521785  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:28:14.550403  228782 cri.go:89] found id: ""
	I1016 18:28:14.550430  228782 logs.go:282] 0 containers: []
	W1016 18:28:14.550447  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:28:14.550456  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:28:14.550513  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:28:14.579551  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:28:14.579572  228782 cri.go:89] found id: ""
	I1016 18:28:14.579579  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:28:14.579628  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:28:14.583832  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:28:14.583897  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:28:14.611937  228782 cri.go:89] found id: ""
	I1016 18:28:14.611964  228782 logs.go:282] 0 containers: []
	W1016 18:28:14.611972  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:28:14.611978  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:28:14.612034  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:28:14.641538  228782 cri.go:89] found id: "d4b57a3502815be7d56599ccff60e46c347c28c3c24364506c3b936e45cf1929"
	I1016 18:28:14.641565  228782 cri.go:89] found id: "13e204bd5108593613aac4101add245b76cbecc0ad3e096b9477f6c9fddc01ae"
	I1016 18:28:14.641574  228782 cri.go:89] found id: ""
	I1016 18:28:14.641583  228782 logs.go:282] 2 containers: [d4b57a3502815be7d56599ccff60e46c347c28c3c24364506c3b936e45cf1929 13e204bd5108593613aac4101add245b76cbecc0ad3e096b9477f6c9fddc01ae]
	I1016 18:28:14.641640  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:28:14.645754  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:28:14.649688  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:28:14.649774  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:28:14.679419  228782 cri.go:89] found id: ""
	I1016 18:28:14.679446  228782 logs.go:282] 0 containers: []
	W1016 18:28:14.679456  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:28:14.679464  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:28:14.679514  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:28:14.707471  228782 cri.go:89] found id: ""
	I1016 18:28:14.707498  228782 logs.go:282] 0 containers: []
	W1016 18:28:14.707505  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:28:14.707518  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:28:14.707527  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:28:14.725133  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:28:14.725170  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:28:14.789555  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:28:14.789578  228782 logs.go:123] Gathering logs for kube-controller-manager [13e204bd5108593613aac4101add245b76cbecc0ad3e096b9477f6c9fddc01ae] ...
	I1016 18:28:14.789595  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13e204bd5108593613aac4101add245b76cbecc0ad3e096b9477f6c9fddc01ae"
	I1016 18:28:14.820788  228782 logs.go:123] Gathering logs for kube-apiserver [cc097aa6deaae2796a29ce872ed20155a22253a5bc0ad1b581600d8522e28674] ...
	I1016 18:28:14.820822  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc097aa6deaae2796a29ce872ed20155a22253a5bc0ad1b581600d8522e28674"
	I1016 18:28:14.853821  228782 logs.go:123] Gathering logs for kube-apiserver [d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20] ...
	I1016 18:28:14.853847  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20"
	W1016 18:28:14.880683  228782 logs.go:130] failed kube-apiserver [d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20": Process exited with status 1
	stdout:
	
	stderr:
	E1016 18:28:14.878350    1615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20\": container with ID starting with d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20 not found: ID does not exist" containerID="d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20"
	time="2025-10-16T18:28:14Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20\": container with ID starting with d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1016 18:28:14.878350    1615 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20\": container with ID starting with d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20 not found: ID does not exist" containerID="d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20"
	time="2025-10-16T18:28:14Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20\": container with ID starting with d8d94147eaddf2118cc5b5f421addbff34729b052b180038ab9b7805f196ff20 not found: ID does not exist"
	
	** /stderr **
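	
	The NotFound failure above is a race: container d8d94147... was listed at 18:28:14.48 but removed before its logs were fetched at 18:28:14.88. A race-tolerant sketch of the same gather step, assuming only crictl on the node's PATH:
	
	  # Re-list IDs immediately before fetching, and tolerate containers
	  # that disappear between the list and the log read.
	  for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
	    sudo crictl logs --tail 400 "$id" 2>/dev/null \
	      || echo "container $id was removed before its logs could be read"
	  done
	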
	I1016 18:28:14.880708  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:28:14.880738  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:28:14.923296  228782 logs.go:123] Gathering logs for kube-controller-manager [d4b57a3502815be7d56599ccff60e46c347c28c3c24364506c3b936e45cf1929] ...
	I1016 18:28:14.923325  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4b57a3502815be7d56599ccff60e46c347c28c3c24364506c3b936e45cf1929"
	I1016 18:28:14.950380  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:28:14.950402  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:28:14.994822  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:28:14.994852  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:28:15.029153  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:28:15.029180  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:28:17.598784  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:28:17.599273  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:28:17.599326  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:28:17.599371  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:28:17.627992  228782 cri.go:89] found id: "cc097aa6deaae2796a29ce872ed20155a22253a5bc0ad1b581600d8522e28674"
	I1016 18:28:17.628013  228782 cri.go:89] found id: ""
	I1016 18:28:17.628021  228782 logs.go:282] 1 containers: [cc097aa6deaae2796a29ce872ed20155a22253a5bc0ad1b581600d8522e28674]
	I1016 18:28:17.628077  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:28:17.632267  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:28:17.632329  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:28:17.661511  228782 cri.go:89] found id: ""
	I1016 18:28:17.661539  228782 logs.go:282] 0 containers: []
	W1016 18:28:17.661551  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:28:17.661558  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:28:17.661619  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:28:17.690655  228782 cri.go:89] found id: ""
	I1016 18:28:17.690684  228782 logs.go:282] 0 containers: []
	W1016 18:28:17.690694  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:28:17.690700  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:28:17.690775  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:28:17.718972  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:28:17.718994  228782 cri.go:89] found id: ""
	I1016 18:28:17.719007  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:28:17.719057  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:28:17.723385  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:28:17.723443  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:28:17.751334  228782 cri.go:89] found id: ""
	I1016 18:28:17.751365  228782 logs.go:282] 0 containers: []
	W1016 18:28:17.751377  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:28:17.751385  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:28:17.751454  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:28:17.779501  228782 cri.go:89] found id: "d4b57a3502815be7d56599ccff60e46c347c28c3c24364506c3b936e45cf1929"
	I1016 18:28:17.779526  228782 cri.go:89] found id: "13e204bd5108593613aac4101add245b76cbecc0ad3e096b9477f6c9fddc01ae"
	I1016 18:28:17.779532  228782 cri.go:89] found id: ""
	I1016 18:28:17.779550  228782 logs.go:282] 2 containers: [d4b57a3502815be7d56599ccff60e46c347c28c3c24364506c3b936e45cf1929 13e204bd5108593613aac4101add245b76cbecc0ad3e096b9477f6c9fddc01ae]
	I1016 18:28:17.779604  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:28:17.784254  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:28:17.787908  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:28:17.787970  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:28:17.816746  228782 cri.go:89] found id: ""
	I1016 18:28:17.816775  228782 logs.go:282] 0 containers: []
	W1016 18:28:17.816786  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:28:17.816794  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:28:17.816859  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:28:17.844474  228782 cri.go:89] found id: ""
	I1016 18:28:17.844502  228782 logs.go:282] 0 containers: []
	W1016 18:28:17.844510  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:28:17.844522  228782 logs.go:123] Gathering logs for kube-apiserver [cc097aa6deaae2796a29ce872ed20155a22253a5bc0ad1b581600d8522e28674] ...
	I1016 18:28:17.844532  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc097aa6deaae2796a29ce872ed20155a22253a5bc0ad1b581600d8522e28674"
	I1016 18:28:17.878207  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:28:17.878238  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:28:17.926242  228782 logs.go:123] Gathering logs for kube-controller-manager [d4b57a3502815be7d56599ccff60e46c347c28c3c24364506c3b936e45cf1929] ...
	I1016 18:28:17.926273  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4b57a3502815be7d56599ccff60e46c347c28c3c24364506c3b936e45cf1929"
	I1016 18:28:17.953396  228782 logs.go:123] Gathering logs for kube-controller-manager [13e204bd5108593613aac4101add245b76cbecc0ad3e096b9477f6c9fddc01ae] ...
	I1016 18:28:17.953419  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13e204bd5108593613aac4101add245b76cbecc0ad3e096b9477f6c9fddc01ae"
	I1016 18:28:17.982063  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:28:17.982086  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:28:18.025204  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:28:18.025239  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:28:18.058957  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:28:18.058989  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:28:18.134449  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:28:18.134492  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:28:18.149585  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:28:18.149616  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:28:18.208209  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
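	
	When kubectl reports "connection to the server localhost:8443 was refused", a quick way to separate a dead apiserver from a networking problem is to check whether anything is bound to the port on the node. A sketch, assuming iproute2's ss is present in the node image:
	
	  # No listener on 8443 means the apiserver process itself is down,
	  # not merely unreachable from the client.
	  sudo ss -tlnp | grep -w 8443 || echo "nothing listening on 8443"
	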
	
	
	==> CRI-O <==
	Oct 16 18:28:08 no-preload-808539 crio[767]: time="2025-10-16T18:28:08.362193108Z" level=info msg="Starting container: 1225b0dd4985a32485eee24e8d2e2880efa5a2129bdf22f40936b1dc30e4cb9d" id=240d5b88-4618-41a7-9ac5-134d00fb2757 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:28:08 no-preload-808539 crio[767]: time="2025-10-16T18:28:08.364304344Z" level=info msg="Started container" PID=2932 containerID=1225b0dd4985a32485eee24e8d2e2880efa5a2129bdf22f40936b1dc30e4cb9d description=kube-system/coredns-66bc5c9577-ntqqg/coredns id=240d5b88-4618-41a7-9ac5-134d00fb2757 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4cc028fcc03f8ed75af2b7ee3eb4ba7e1fa4bc36b3fe94bea5170abd4c9d350b
	Oct 16 18:28:11 no-preload-808539 crio[767]: time="2025-10-16T18:28:11.496413761Z" level=info msg="Running pod sandbox: default/busybox/POD" id=2327ddb7-876a-426e-acbc-b5ef6e7515e7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:28:11 no-preload-808539 crio[767]: time="2025-10-16T18:28:11.49658761Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:28:11 no-preload-808539 crio[767]: time="2025-10-16T18:28:11.502337334Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4e9cf561403ab4772b35a2468507c8737e5d5fefee4a2a1979c62f37022dfde4 UID:92e14652-216c-4f68-9dcc-f986c05ef8d4 NetNS:/var/run/netns/5f386b4d-c819-4b0c-90c6-cf0d986c4f0e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000174a50}] Aliases:map[]}"
	Oct 16 18:28:11 no-preload-808539 crio[767]: time="2025-10-16T18:28:11.502368332Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 16 18:28:11 no-preload-808539 crio[767]: time="2025-10-16T18:28:11.513310963Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4e9cf561403ab4772b35a2468507c8737e5d5fefee4a2a1979c62f37022dfde4 UID:92e14652-216c-4f68-9dcc-f986c05ef8d4 NetNS:/var/run/netns/5f386b4d-c819-4b0c-90c6-cf0d986c4f0e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000174a50}] Aliases:map[]}"
	Oct 16 18:28:11 no-preload-808539 crio[767]: time="2025-10-16T18:28:11.513498487Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 16 18:28:11 no-preload-808539 crio[767]: time="2025-10-16T18:28:11.514541891Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 16 18:28:11 no-preload-808539 crio[767]: time="2025-10-16T18:28:11.515534758Z" level=info msg="Ran pod sandbox 4e9cf561403ab4772b35a2468507c8737e5d5fefee4a2a1979c62f37022dfde4 with infra container: default/busybox/POD" id=2327ddb7-876a-426e-acbc-b5ef6e7515e7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:28:11 no-preload-808539 crio[767]: time="2025-10-16T18:28:11.516984287Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=061ad51f-823b-49f9-aace-0d7104858679 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:28:11 no-preload-808539 crio[767]: time="2025-10-16T18:28:11.517145103Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=061ad51f-823b-49f9-aace-0d7104858679 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:28:11 no-preload-808539 crio[767]: time="2025-10-16T18:28:11.517178639Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=061ad51f-823b-49f9-aace-0d7104858679 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:28:11 no-preload-808539 crio[767]: time="2025-10-16T18:28:11.517773186Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=df719bde-a571-4086-a4af-6de06b456967 name=/runtime.v1.ImageService/PullImage
	Oct 16 18:28:11 no-preload-808539 crio[767]: time="2025-10-16T18:28:11.519318033Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 16 18:28:12 no-preload-808539 crio[767]: time="2025-10-16T18:28:12.891633675Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=df719bde-a571-4086-a4af-6de06b456967 name=/runtime.v1.ImageService/PullImage
	Oct 16 18:28:12 no-preload-808539 crio[767]: time="2025-10-16T18:28:12.892258844Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=89a71804-e7e7-4230-aa61-485c629f942e name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:28:12 no-preload-808539 crio[767]: time="2025-10-16T18:28:12.893725498Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c8f31929-a8d8-46fd-acc2-7084ecd1c42f name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:28:12 no-preload-808539 crio[767]: time="2025-10-16T18:28:12.896894797Z" level=info msg="Creating container: default/busybox/busybox" id=c19a780e-3937-4f1b-9dde-5e04efb3500b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:28:12 no-preload-808539 crio[767]: time="2025-10-16T18:28:12.897540233Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:28:12 no-preload-808539 crio[767]: time="2025-10-16T18:28:12.901505609Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:28:12 no-preload-808539 crio[767]: time="2025-10-16T18:28:12.902044824Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:28:12 no-preload-808539 crio[767]: time="2025-10-16T18:28:12.928412376Z" level=info msg="Created container 0c3cd150efda56d2c19eddd8c0bc0cfa43f91d8f8bad5f1814eec23816996bba: default/busybox/busybox" id=c19a780e-3937-4f1b-9dde-5e04efb3500b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:28:12 no-preload-808539 crio[767]: time="2025-10-16T18:28:12.92911347Z" level=info msg="Starting container: 0c3cd150efda56d2c19eddd8c0bc0cfa43f91d8f8bad5f1814eec23816996bba" id=c90b1830-2d07-4ed6-b3b9-17ccf1cc822e name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:28:12 no-preload-808539 crio[767]: time="2025-10-16T18:28:12.930887698Z" level=info msg="Started container" PID=3009 containerID=0c3cd150efda56d2c19eddd8c0bc0cfa43f91d8f8bad5f1814eec23816996bba description=default/busybox/busybox id=c90b1830-2d07-4ed6-b3b9-17ccf1cc822e name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e9cf561403ab4772b35a2468507c8737e5d5fefee4a2a1979c62f37022dfde4
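	
	The pull sequence above shows gcr.io/k8s-minikube/busybox:1.28.4-glibc being fetched on demand (~1.4s), which is expected in a no-preload profile. A sketch for warming the image cache before the test, assuming the node can reach the registry:
	
	  # Pre-pull the test image so pod creation does not pay the pull cost.
	  sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	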
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	0c3cd150efda5       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   4e9cf561403ab       busybox                                     default
	1225b0dd4985a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   4cc028fcc03f8       coredns-66bc5c9577-ntqqg                    kube-system
	6b7cf9f139455       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   e5448591175e3       storage-provisioner                         kube-system
	ac66b5a0b2e4a       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   7a139c82b2039       kindnet-kxznd                               kube-system
	d0bc553d49222       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      25 seconds ago      Running             kube-proxy                0                   836c0a7a2ee06       kube-proxy-68kl9                            kube-system
	19ecc3206cc07       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      37 seconds ago      Running             kube-scheduler            0                   d74a29c281ddd       kube-scheduler-no-preload-808539            kube-system
	22e9b7ce1d0f7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      37 seconds ago      Running             kube-controller-manager   0                   94d18a51ae884       kube-controller-manager-no-preload-808539   kube-system
	7d726e64cb3ce       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      37 seconds ago      Running             kube-apiserver            0                   7e94e3d1a954d       kube-apiserver-no-preload-808539            kube-system
	0c916767a72ba       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      37 seconds ago      Running             etcd                      0                   d41e136bc7c7a       etcd-no-preload-808539                      kube-system
	
	
	==> coredns [1225b0dd4985a32485eee24e8d2e2880efa5a2129bdf22f40936b1dc30e4cb9d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43566 - 20260 "HINFO IN 5677730136873025346.3721977922894644098. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024441435s
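	
	The single NXDOMAIN above is CoreDNS's startup self-check (a HINFO query against itself), not a client failure. A hedged spot-check of in-cluster resolution from the node, assuming dig is available and using the kube-dns ClusterIP 10.96.0.10 allocated in the kube-apiserver log below:
	
	  # An answer here confirms the DNS path; a timeout would implicate
	  # CoreDNS or the service routing rather than the apiserver.
	  dig @10.96.0.10 kubernetes.default.svc.cluster.local +short
	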
	
	
	==> describe nodes <==
	Name:               no-preload-808539
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-808539
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=no-preload-808539
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_27_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:27:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-808539
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:28:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:28:18 +0000   Thu, 16 Oct 2025 18:27:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:28:18 +0000   Thu, 16 Oct 2025 18:27:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:28:18 +0000   Thu, 16 Oct 2025 18:27:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 18:28:18 +0000   Thu, 16 Oct 2025 18:28:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-808539
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                738a1706-7fde-4f71-a519-e3178e828487
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-ntqqg                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-no-preload-808539                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-kxznd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-808539             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-808539    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-68kl9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-808539             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node no-preload-808539 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node no-preload-808539 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node no-preload-808539 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node no-preload-808539 event: Registered Node no-preload-808539 in Controller
	  Normal  NodeReady                13s   kubelet          Node no-preload-808539 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.008703] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.022984] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023933] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +2.047848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +4.031714] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +8.063410] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[ +16.382910] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[Oct16 17:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
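	
	The repeated "martian source" entries are rp_filter logging packets with a loopback source arriving on eth0; in this CI environment they are noise. A sketch for muting them on a development node, assuming losing the diagnostic is acceptable (both the global and the per-interface key are cleared, since the kernel combines them):
	
	  # Disable martian-packet logging; routing behaviour is unchanged.
	  sudo sysctl -w net.ipv4.conf.all.log_martians=0 \
	               net.ipv4.conf.eth0.log_martians=0
	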
	
	
	==> etcd [0c916767a72ba4d38dc9627e5ed50f0ffcd62f5176cbb0340bd63e7eaf3954ac] <==
	{"level":"warn","ts":"2025-10-16T18:27:44.791169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.798864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.806483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.815439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.824156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.837265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.846464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.853801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.861832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.870522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.880769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.891959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.903339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.914280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.923710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.933228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.944063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.955220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.966809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.974449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.987002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.990876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:44.997875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:45.006365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:27:45.072369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60154","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:28:20 up  1:10,  0 user,  load average: 2.31, 2.45, 1.59
	Linux no-preload-808539 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ac66b5a0b2e4a9b7608ffc12bceab80079507f44f049f1acfc5a31606517edce] <==
	I1016 18:27:57.456705       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 18:27:57.456963       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1016 18:27:57.457126       1 main.go:148] setting mtu 1500 for CNI 
	I1016 18:27:57.457143       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 18:27:57.457163       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T18:27:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 18:27:57.657170       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 18:27:57.657238       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 18:27:57.657248       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 18:27:57.657746       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 18:27:57.857796       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 18:27:57.857821       1 metrics.go:72] Registering metrics
	I1016 18:27:57.857889       1 controller.go:711] "Syncing nftables rules"
	I1016 18:28:07.662818       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1016 18:28:07.662877       1 main.go:301] handling current node
	I1016 18:28:17.659845       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1016 18:28:17.659877       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7d726e64cb3cec9fa4f5bac5e5b1dbc6aa62e2e82cfa406234a36dfe7c5df5ac] <==
	I1016 18:27:45.730313       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1016 18:27:45.730491       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1016 18:27:45.734131       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1016 18:27:45.737466       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:27:45.737743       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1016 18:27:45.776875       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1016 18:27:45.938333       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:27:46.636154       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1016 18:27:46.649600       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1016 18:27:46.649623       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 18:27:47.263488       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 18:27:47.314453       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 18:27:47.436438       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1016 18:27:47.446372       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1016 18:27:47.447968       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 18:27:47.454376       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 18:27:47.667538       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 18:27:48.480002       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 18:27:48.493998       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1016 18:27:48.503883       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1016 18:27:52.670270       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:27:52.674394       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:27:53.717350       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 18:27:53.767731       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1016 18:28:19.278439       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:53278: use of closed network connection
	
	
	==> kube-controller-manager [22e9b7ce1d0f7834e8bbcb761fcc14ebcf3a6f771eab269d1fd77da309e46501] <==
	I1016 18:27:52.663275       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1016 18:27:52.663317       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1016 18:27:52.663348       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:27:52.663363       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 18:27:52.663371       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 18:27:52.663627       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1016 18:27:52.663792       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 18:27:52.663821       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 18:27:52.663904       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1016 18:27:52.664076       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1016 18:27:52.664120       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1016 18:27:52.664165       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1016 18:27:52.664818       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1016 18:27:52.664969       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1016 18:27:52.665062       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1016 18:27:52.669675       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1016 18:27:52.669744       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:27:52.669762       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1016 18:27:52.669804       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1016 18:27:52.669813       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1016 18:27:52.669822       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1016 18:27:52.669921       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1016 18:27:52.676091       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-808539" podCIDRs=["10.244.0.0/24"]
	I1016 18:27:52.692508       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:28:12.617111       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d0bc553d49222bc773035795fde222de8e2c961f70cf785c45b1b1f97579f055] <==
	I1016 18:27:55.693529       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:27:55.747704       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:27:55.848848       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:27:55.848888       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1016 18:27:55.849007       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:27:55.869527       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:27:55.869610       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:27:55.875880       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:27:55.876249       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:27:55.876287       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:27:55.877782       1 config.go:200] "Starting service config controller"
	I1016 18:27:55.877800       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:27:55.877896       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:27:55.877948       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:27:55.877914       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:27:55.877994       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:27:55.878000       1 config.go:309] "Starting node config controller"
	I1016 18:27:55.878009       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:27:55.878016       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:27:55.977986       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 18:27:55.978628       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 18:27:55.978640       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [19ecc3206cc0724b764a56ad787efd86a3bb51c3a9e5bbe48c6ab4f150f166b5] <==
	E1016 18:27:45.694610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:27:45.694642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 18:27:45.694704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 18:27:45.694751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 18:27:45.694756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 18:27:45.694831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:27:45.694857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 18:27:45.694858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 18:27:45.694920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 18:27:45.694933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 18:27:45.694961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 18:27:45.695046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 18:27:46.551751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 18:27:46.559141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 18:27:46.567368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:27:46.707831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 18:27:46.729013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 18:27:46.734152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1016 18:27:46.835769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 18:27:46.880993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 18:27:46.914364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 18:27:46.914504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 18:27:46.946572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 18:27:47.033345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1016 18:27:49.391577       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 18:27:53 no-preload-808539 kubelet[2336]: I1016 18:27:53.853188    2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krn5h\" (UniqueName: \"kubernetes.io/projected/8f4ae10c-9947-42dd-b9d6-a7fe6b4a4464-kube-api-access-krn5h\") pod \"kindnet-kxznd\" (UID: \"8f4ae10c-9947-42dd-b9d6-a7fe6b4a4464\") " pod="kube-system/kindnet-kxznd"
	Oct 16 18:27:53 no-preload-808539 kubelet[2336]: I1016 18:27:53.853255    2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99922fc0-a3ab-4328-95e9-9f2dea6573c9-xtables-lock\") pod \"kube-proxy-68kl9\" (UID: \"99922fc0-a3ab-4328-95e9-9f2dea6573c9\") " pod="kube-system/kube-proxy-68kl9"
	Oct 16 18:27:53 no-preload-808539 kubelet[2336]: I1016 18:27:53.853307    2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f4ae10c-9947-42dd-b9d6-a7fe6b4a4464-lib-modules\") pod \"kindnet-kxznd\" (UID: \"8f4ae10c-9947-42dd-b9d6-a7fe6b4a4464\") " pod="kube-system/kindnet-kxznd"
	Oct 16 18:27:53 no-preload-808539 kubelet[2336]: I1016 18:27:53.853401    2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99922fc0-a3ab-4328-95e9-9f2dea6573c9-lib-modules\") pod \"kube-proxy-68kl9\" (UID: \"99922fc0-a3ab-4328-95e9-9f2dea6573c9\") " pod="kube-system/kube-proxy-68kl9"
	Oct 16 18:27:53 no-preload-808539 kubelet[2336]: I1016 18:27:53.853445    2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8f4ae10c-9947-42dd-b9d6-a7fe6b4a4464-cni-cfg\") pod \"kindnet-kxznd\" (UID: \"8f4ae10c-9947-42dd-b9d6-a7fe6b4a4464\") " pod="kube-system/kindnet-kxznd"
	Oct 16 18:27:54 no-preload-808539 kubelet[2336]: E1016 18:27:54.955761    2336 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 16 18:27:54 no-preload-808539 kubelet[2336]: E1016 18:27:54.955882    2336 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/99922fc0-a3ab-4328-95e9-9f2dea6573c9-kube-proxy podName:99922fc0-a3ab-4328-95e9-9f2dea6573c9 nodeName:}" failed. No retries permitted until 2025-10-16 18:27:55.455856849 +0000 UTC m=+7.222431391 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/99922fc0-a3ab-4328-95e9-9f2dea6573c9-kube-proxy") pod "kube-proxy-68kl9" (UID: "99922fc0-a3ab-4328-95e9-9f2dea6573c9") : failed to sync configmap cache: timed out waiting for the condition
	Oct 16 18:27:54 no-preload-808539 kubelet[2336]: E1016 18:27:54.961062    2336 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 16 18:27:54 no-preload-808539 kubelet[2336]: E1016 18:27:54.961097    2336 projected.go:196] Error preparing data for projected volume kube-api-access-krn5h for pod kube-system/kindnet-kxznd: failed to sync configmap cache: timed out waiting for the condition
	Oct 16 18:27:54 no-preload-808539 kubelet[2336]: E1016 18:27:54.961175    2336 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8f4ae10c-9947-42dd-b9d6-a7fe6b4a4464-kube-api-access-krn5h podName:8f4ae10c-9947-42dd-b9d6-a7fe6b4a4464 nodeName:}" failed. No retries permitted until 2025-10-16 18:27:55.461158366 +0000 UTC m=+7.227732902 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-krn5h" (UniqueName: "kubernetes.io/projected/8f4ae10c-9947-42dd-b9d6-a7fe6b4a4464-kube-api-access-krn5h") pod "kindnet-kxznd" (UID: "8f4ae10c-9947-42dd-b9d6-a7fe6b4a4464") : failed to sync configmap cache: timed out waiting for the condition
	Oct 16 18:27:54 no-preload-808539 kubelet[2336]: E1016 18:27:54.961341    2336 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 16 18:27:54 no-preload-808539 kubelet[2336]: E1016 18:27:54.961356    2336 projected.go:196] Error preparing data for projected volume kube-api-access-f4cks for pod kube-system/kube-proxy-68kl9: failed to sync configmap cache: timed out waiting for the condition
	Oct 16 18:27:54 no-preload-808539 kubelet[2336]: E1016 18:27:54.961387    2336 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99922fc0-a3ab-4328-95e9-9f2dea6573c9-kube-api-access-f4cks podName:99922fc0-a3ab-4328-95e9-9f2dea6573c9 nodeName:}" failed. No retries permitted until 2025-10-16 18:27:55.461378127 +0000 UTC m=+7.227952657 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-f4cks" (UniqueName: "kubernetes.io/projected/99922fc0-a3ab-4328-95e9-9f2dea6573c9-kube-api-access-f4cks") pod "kube-proxy-68kl9" (UID: "99922fc0-a3ab-4328-95e9-9f2dea6573c9") : failed to sync configmap cache: timed out waiting for the condition
	Oct 16 18:27:57 no-preload-808539 kubelet[2336]: I1016 18:27:57.375777    2336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-68kl9" podStartSLOduration=4.375753185 podStartE2EDuration="4.375753185s" podCreationTimestamp="2025-10-16 18:27:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:27:56.381217292 +0000 UTC m=+8.147791859" watchObservedRunningTime="2025-10-16 18:27:57.375753185 +0000 UTC m=+9.142327734"
	Oct 16 18:27:57 no-preload-808539 kubelet[2336]: I1016 18:27:57.375947    2336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kxznd" podStartSLOduration=2.777544565 podStartE2EDuration="4.375935071s" podCreationTimestamp="2025-10-16 18:27:53 +0000 UTC" firstStartedPulling="2025-10-16 18:27:55.614622088 +0000 UTC m=+7.381196617" lastFinishedPulling="2025-10-16 18:27:57.213012578 +0000 UTC m=+8.979587123" observedRunningTime="2025-10-16 18:27:57.375923692 +0000 UTC m=+9.142498259" watchObservedRunningTime="2025-10-16 18:27:57.375935071 +0000 UTC m=+9.142509655"
	Oct 16 18:28:07 no-preload-808539 kubelet[2336]: I1016 18:28:07.976377    2336 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 16 18:28:08 no-preload-808539 kubelet[2336]: I1016 18:28:08.048987    2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g8nh\" (UniqueName: \"kubernetes.io/projected/3d28093d-4751-4cac-a926-0ec629262ca6-kube-api-access-7g8nh\") pod \"coredns-66bc5c9577-ntqqg\" (UID: \"3d28093d-4751-4cac-a926-0ec629262ca6\") " pod="kube-system/coredns-66bc5c9577-ntqqg"
	Oct 16 18:28:08 no-preload-808539 kubelet[2336]: I1016 18:28:08.049034    2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/633408d0-ddc3-43e6-8f33-9fa9f394758d-tmp\") pod \"storage-provisioner\" (UID: \"633408d0-ddc3-43e6-8f33-9fa9f394758d\") " pod="kube-system/storage-provisioner"
	Oct 16 18:28:08 no-preload-808539 kubelet[2336]: I1016 18:28:08.049064    2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d28093d-4751-4cac-a926-0ec629262ca6-config-volume\") pod \"coredns-66bc5c9577-ntqqg\" (UID: \"3d28093d-4751-4cac-a926-0ec629262ca6\") " pod="kube-system/coredns-66bc5c9577-ntqqg"
	Oct 16 18:28:08 no-preload-808539 kubelet[2336]: I1016 18:28:08.049081    2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc5c7\" (UniqueName: \"kubernetes.io/projected/633408d0-ddc3-43e6-8f33-9fa9f394758d-kube-api-access-hc5c7\") pod \"storage-provisioner\" (UID: \"633408d0-ddc3-43e6-8f33-9fa9f394758d\") " pod="kube-system/storage-provisioner"
	Oct 16 18:28:08 no-preload-808539 kubelet[2336]: I1016 18:28:08.411097    2336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.411073939 podStartE2EDuration="15.411073939s" podCreationTimestamp="2025-10-16 18:27:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:28:08.400077658 +0000 UTC m=+20.166652228" watchObservedRunningTime="2025-10-16 18:28:08.411073939 +0000 UTC m=+20.177648486"
	Oct 16 18:28:08 no-preload-808539 kubelet[2336]: I1016 18:28:08.411248    2336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ntqqg" podStartSLOduration=15.411226041 podStartE2EDuration="15.411226041s" podCreationTimestamp="2025-10-16 18:27:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:28:08.410924809 +0000 UTC m=+20.177499404" watchObservedRunningTime="2025-10-16 18:28:08.411226041 +0000 UTC m=+20.177800623"
	Oct 16 18:28:11 no-preload-808539 kubelet[2336]: I1016 18:28:11.270129    2336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp2zh\" (UniqueName: \"kubernetes.io/projected/92e14652-216c-4f68-9dcc-f986c05ef8d4-kube-api-access-bp2zh\") pod \"busybox\" (UID: \"92e14652-216c-4f68-9dcc-f986c05ef8d4\") " pod="default/busybox"
	Oct 16 18:28:13 no-preload-808539 kubelet[2336]: I1016 18:28:13.414674    2336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.038857862 podStartE2EDuration="2.414654624s" podCreationTimestamp="2025-10-16 18:28:11 +0000 UTC" firstStartedPulling="2025-10-16 18:28:11.517379017 +0000 UTC m=+23.283953570" lastFinishedPulling="2025-10-16 18:28:12.893175782 +0000 UTC m=+24.659750332" observedRunningTime="2025-10-16 18:28:13.414367644 +0000 UTC m=+25.180942215" watchObservedRunningTime="2025-10-16 18:28:13.414654624 +0000 UTC m=+25.181229173"
	Oct 16 18:28:19 no-preload-808539 kubelet[2336]: E1016 18:28:19.278431    2336 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57880->127.0.0.1:43841: write tcp 127.0.0.1:57880->127.0.0.1:43841: write: broken pipe
	
	
	==> storage-provisioner [6b7cf9f1394550aaf504647fce24f75624760e166bbabef852de22e6a97846d1] <==
	I1016 18:28:08.371780       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 18:28:08.383102       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 18:28:08.383165       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1016 18:28:08.386026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:28:08.392056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 18:28:08.392211       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 18:28:08.392411       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-808539_6b438e87-e4d6-4d83-a533-9bd4f9adebca!
	I1016 18:28:08.392786       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"403a8f30-1976-4add-8440-a3609b846a31", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-808539_6b438e87-e4d6-4d83-a533-9bd4f9adebca became leader
	W1016 18:28:08.395559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:28:08.398928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 18:28:08.493405       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-808539_6b438e87-e4d6-4d83-a533-9bd4f9adebca!
	W1016 18:28:10.402854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:28:10.408548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:28:12.411901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:28:12.417360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:28:14.420490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:28:14.424482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:28:16.428090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:28:16.432171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:28:18.435596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:28:18.439838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:28:20.442590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:28:20.446909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
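
A note on the kube-scheduler block above: the burst of "Failed to watch ... is forbidden" reflector errors at 18:27:46 is the scheduler racing apiserver startup; RBAC rejects its list/watch calls until the authorizer's caches warm up, and the errors stop once "Caches are synced" is logged at 18:27:49. To confirm after the fact that the scheduler's permissions really are in place, a SubjectAccessReview can be issued against the live cluster. The Go sketch below is illustrative only, not part of this harness, and assumes a kubeconfig at the default location:

	// rbac_check.go - ask the apiserver whether system:kube-scheduler may
	// list nodes, mirroring one of the reflector errors above.
	package main

	import (
		"context"
		"fmt"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		sar := &authv1.SubjectAccessReview{
			Spec: authv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb:     "list",
					Resource: "nodes",
				},
			},
		}
		res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
	}

A persistent allowed=false after the cache-sync line would point at a real RBAC regression rather than the startup race seen here.
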
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-808539 -n no-preload-808539
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-808539 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.30s)
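
The repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings in the storage-provisioner block above come from its leader election, which still takes its lock on an Endpoints object (kube-system/k8s.io-minikube-hostpath, per the LeaderElection event). client-go's current recommendation is a coordination.k8s.io Lease lock. Below is a minimal sketch of that replacement, reusing the same lock name and namespace; the durations are illustrative defaults, not the provisioner's actual settings:

	// lease_election.go - Lease-based leader election, the non-deprecated
	// equivalent of the Endpoints lock the warnings above refer to.
	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease; start provisioning") },
				OnStoppedLeading: func() { log.Println("lost lease") },
			},
		})
	}

Renewals then touch a Lease object instead of rewriting Endpoints every few seconds, which is what silences the deprecation warnings.
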

TestStartStop/group/old-k8s-version/serial/Pause (5.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-956814 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-956814 --alsologtostderr -v=1: exit status 80 (1.594283711s)

-- stdout --
	* Pausing node old-k8s-version-956814 ... 
	
	

-- /stdout --
** stderr ** 
	I1016 18:28:40.627567  246002 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:28:40.627878  246002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:28:40.627890  246002 out.go:374] Setting ErrFile to fd 2...
	I1016 18:28:40.627897  246002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:28:40.628174  246002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:28:40.628443  246002 out.go:368] Setting JSON to false
	I1016 18:28:40.628490  246002 mustload.go:65] Loading cluster: old-k8s-version-956814
	I1016 18:28:40.628855  246002 config.go:182] Loaded profile config "old-k8s-version-956814": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1016 18:28:40.629251  246002 cli_runner.go:164] Run: docker container inspect old-k8s-version-956814 --format={{.State.Status}}
	I1016 18:28:40.648545  246002 host.go:66] Checking if "old-k8s-version-956814" exists ...
	I1016 18:28:40.648893  246002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:28:40.710987  246002 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-16 18:28:40.698839198 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:28:40.711638  246002 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-956814 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1016 18:28:40.714848  246002 out.go:179] * Pausing node old-k8s-version-956814 ... 
	I1016 18:28:40.716343  246002 host.go:66] Checking if "old-k8s-version-956814" exists ...
	I1016 18:28:40.716637  246002 ssh_runner.go:195] Run: systemctl --version
	I1016 18:28:40.716675  246002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-956814
	I1016 18:28:40.736901  246002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/old-k8s-version-956814/id_rsa Username:docker}
	I1016 18:28:40.836845  246002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:28:40.859256  246002 pause.go:52] kubelet running: true
	I1016 18:28:40.859351  246002 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:28:41.025560  246002 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:28:41.025644  246002 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:28:41.094352  246002 cri.go:89] found id: "d0c68f3c4b25019a937ecef6491d6fa56971a561a4450a4b9dc6ac28bdde0ed1"
	I1016 18:28:41.094375  246002 cri.go:89] found id: "73533f2caeb1c8ba0bf7613592d2735400b8cced901f45bb29d25fb2ac5be519"
	I1016 18:28:41.094381  246002 cri.go:89] found id: "f85605290552a7127c754cdcd6384894e324c1a39c7ed2c5293fece11354cded"
	I1016 18:28:41.094385  246002 cri.go:89] found id: "1bef49206beb8688b2e461115ae707bb2a04be3521d5212c1567cb3df756f6ff"
	I1016 18:28:41.094389  246002 cri.go:89] found id: "995d1735348a9b1f431941e2e8c8991ad732311551103b42bf5418984a4dddf1"
	I1016 18:28:41.094392  246002 cri.go:89] found id: "e6e794e317e67fe62de737c5d5d21f76ffd898adc393e7b8d3b5127f203478a3"
	I1016 18:28:41.094396  246002 cri.go:89] found id: "e255d27c3903c0fe570376a329840373a1ad5b5caca41fc82de4b5a229ebafb0"
	I1016 18:28:41.094400  246002 cri.go:89] found id: "58a737ae76bdf77210a125a06ade45f191a00aba7f2561852cfb13f05b054511"
	I1016 18:28:41.094403  246002 cri.go:89] found id: "04c714a2b0c86cdc256763ea2928fc53c7c7d744cb6468b9458d572797f2c163"
	I1016 18:28:41.094410  246002 cri.go:89] found id: "7648e093dcf554392b6ee6e3cab35361de2ce6729397abb12e1d0b18c2956e63"
	I1016 18:28:41.094415  246002 cri.go:89] found id: "33d7dc72b038f311f8b70695fd1551fb1ba18755060404c26beb1160688914ea"
	I1016 18:28:41.094428  246002 cri.go:89] found id: ""
	I1016 18:28:41.094470  246002 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:28:41.106724  246002 retry.go:31] will retry after 291.679308ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:28:41Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:28:41.399282  246002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:28:41.412887  246002 pause.go:52] kubelet running: false
	I1016 18:28:41.412969  246002 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:28:41.571155  246002 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:28:41.571234  246002 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:28:41.644890  246002 cri.go:89] found id: "d0c68f3c4b25019a937ecef6491d6fa56971a561a4450a4b9dc6ac28bdde0ed1"
	I1016 18:28:41.644910  246002 cri.go:89] found id: "73533f2caeb1c8ba0bf7613592d2735400b8cced901f45bb29d25fb2ac5be519"
	I1016 18:28:41.644916  246002 cri.go:89] found id: "f85605290552a7127c754cdcd6384894e324c1a39c7ed2c5293fece11354cded"
	I1016 18:28:41.644920  246002 cri.go:89] found id: "1bef49206beb8688b2e461115ae707bb2a04be3521d5212c1567cb3df756f6ff"
	I1016 18:28:41.644925  246002 cri.go:89] found id: "995d1735348a9b1f431941e2e8c8991ad732311551103b42bf5418984a4dddf1"
	I1016 18:28:41.644930  246002 cri.go:89] found id: "e6e794e317e67fe62de737c5d5d21f76ffd898adc393e7b8d3b5127f203478a3"
	I1016 18:28:41.644934  246002 cri.go:89] found id: "e255d27c3903c0fe570376a329840373a1ad5b5caca41fc82de4b5a229ebafb0"
	I1016 18:28:41.644938  246002 cri.go:89] found id: "58a737ae76bdf77210a125a06ade45f191a00aba7f2561852cfb13f05b054511"
	I1016 18:28:41.644943  246002 cri.go:89] found id: "04c714a2b0c86cdc256763ea2928fc53c7c7d744cb6468b9458d572797f2c163"
	I1016 18:28:41.644960  246002 cri.go:89] found id: "7648e093dcf554392b6ee6e3cab35361de2ce6729397abb12e1d0b18c2956e63"
	I1016 18:28:41.644968  246002 cri.go:89] found id: "33d7dc72b038f311f8b70695fd1551fb1ba18755060404c26beb1160688914ea"
	I1016 18:28:41.644972  246002 cri.go:89] found id: ""
	I1016 18:28:41.645015  246002 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:28:41.657100  246002 retry.go:31] will retry after 262.129013ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:28:41Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:28:41.919511  246002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:28:41.933520  246002 pause.go:52] kubelet running: false
	I1016 18:28:41.933584  246002 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:28:42.081339  246002 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:28:42.081406  246002 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:28:42.150907  246002 cri.go:89] found id: "d0c68f3c4b25019a937ecef6491d6fa56971a561a4450a4b9dc6ac28bdde0ed1"
	I1016 18:28:42.150934  246002 cri.go:89] found id: "73533f2caeb1c8ba0bf7613592d2735400b8cced901f45bb29d25fb2ac5be519"
	I1016 18:28:42.150939  246002 cri.go:89] found id: "f85605290552a7127c754cdcd6384894e324c1a39c7ed2c5293fece11354cded"
	I1016 18:28:42.150944  246002 cri.go:89] found id: "1bef49206beb8688b2e461115ae707bb2a04be3521d5212c1567cb3df756f6ff"
	I1016 18:28:42.150947  246002 cri.go:89] found id: "995d1735348a9b1f431941e2e8c8991ad732311551103b42bf5418984a4dddf1"
	I1016 18:28:42.150950  246002 cri.go:89] found id: "e6e794e317e67fe62de737c5d5d21f76ffd898adc393e7b8d3b5127f203478a3"
	I1016 18:28:42.150952  246002 cri.go:89] found id: "e255d27c3903c0fe570376a329840373a1ad5b5caca41fc82de4b5a229ebafb0"
	I1016 18:28:42.150955  246002 cri.go:89] found id: "58a737ae76bdf77210a125a06ade45f191a00aba7f2561852cfb13f05b054511"
	I1016 18:28:42.150957  246002 cri.go:89] found id: "04c714a2b0c86cdc256763ea2928fc53c7c7d744cb6468b9458d572797f2c163"
	I1016 18:28:42.150962  246002 cri.go:89] found id: "7648e093dcf554392b6ee6e3cab35361de2ce6729397abb12e1d0b18c2956e63"
	I1016 18:28:42.150964  246002 cri.go:89] found id: "33d7dc72b038f311f8b70695fd1551fb1ba18755060404c26beb1160688914ea"
	I1016 18:28:42.150966  246002 cri.go:89] found id: ""
	I1016 18:28:42.151012  246002 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:28:42.166847  246002 out.go:203] 
	W1016 18:28:42.168464  246002 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:28:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:28:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:28:42.168483  246002 out.go:285] * 
	* 
	W1016 18:28:42.172492  246002 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:28:42.173912  246002 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-956814 --alsologtostderr -v=1 failed: exit status 80
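
The pause failure above is mechanical: minikube enumerates running containers with `sudo runc list -f json`, the command exits 1 with "open /run/runc: no such file or directory", and after two jittered retries (the retry.go lines) it gives up with GUEST_PAUSE. The absent /run/runc state directory on this cri-o node is the proximate cause; whether cri-o keeps its runtime state under a different runc root here is an inference from the message, not something the log proves. A sketch of the retry shape visible in the log, with hypothetical helper names:

	// retry_runc.go - re-run `runc list` a few times with jittered delays,
	// roughly matching the ~250-300ms intervals logged above.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func runcList() ([]byte, error) {
		return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	}

	func main() {
		const attempts = 3
		for i := 1; i <= attempts; i++ {
			out, err := runcList()
			if err == nil {
				fmt.Printf("running containers: %s\n", out)
				return
			}
			if i == attempts {
				fmt.Printf("giving up after %d attempts: %v\n%s", attempts, err, out)
				return
			}
			// jittered backoff in the 250-300ms range seen in retry.go above
			d := 250*time.Millisecond + time.Duration(rand.Intn(50))*time.Millisecond
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
	}

The bounded attempt count lets a transient listing error recover without stalling the pause for more than about a second, which matches the 1.6s wall time of the failed command.
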
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-956814
helpers_test.go:243: (dbg) docker inspect old-k8s-version-956814:

-- stdout --
	[
	    {
	        "Id": "2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d",
	        "Created": "2025-10-16T18:26:24.391336039Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 239007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:27:37.937839611Z",
	            "FinishedAt": "2025-10-16T18:27:35.867412237Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d/hostname",
	        "HostsPath": "/var/lib/docker/containers/2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d/hosts",
	        "LogPath": "/var/lib/docker/containers/2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d/2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d-json.log",
	        "Name": "/old-k8s-version-956814",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-956814:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-956814",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d",
	                "LowerDir": "/var/lib/docker/overlay2/6dc9fe3850741937f409c4be942acfc27b5b90ea6a67e2a0b6209b82f9ab1b71-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6dc9fe3850741937f409c4be942acfc27b5b90ea6a67e2a0b6209b82f9ab1b71/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6dc9fe3850741937f409c4be942acfc27b5b90ea6a67e2a0b6209b82f9ab1b71/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6dc9fe3850741937f409c4be942acfc27b5b90ea6a67e2a0b6209b82f9ab1b71/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-956814",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-956814/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-956814",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-956814",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-956814",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b1da086b8b6a8703c2f69b4a059f1b9b9b6b94614e3fd18b9aa22e72ad625a1c",
	            "SandboxKey": "/var/run/docker/netns/b1da086b8b6a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-956814": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:0a:38:d4:04:7c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d1d700daadff6f62e8b6f47bfafd5296def1ddd0bdc304135db2dbcfd26dcae3",
	                    "EndpointID": "8f040f554dc7a93506e65627d6c34d2c600ac55054b60826a98c1266be5ac301",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-956814",
	                        "2fe013b2be52"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
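
For reference, the container-state probes in the stderr above (`docker container inspect ... --format={{.State.Status}}`, plus the "22/tcp" HostPort lookup used to build the SSH client) are plain docker-CLI Go templates over the same JSON shown in this inspect dump. A standalone sketch of those two probes, assuming only that the docker CLI is on PATH:

	// inspect_state.go - read a container's state and its mapped SSH port
	// via `docker container inspect` format templates, as cli_runner does above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func inspect(name, format string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name, "--format", format).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		const name = "old-k8s-version-956814"
		status, err := inspect(name, "{{.State.Status}}")
		if err != nil {
			panic(err)
		}
		sshPort, err := inspect(name, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
		if err != nil {
			panic(err)
		}
		fmt.Printf("status=%s ssh=127.0.0.1:%s\n", status, sshPort)
	}

Against this dump both probes succeed: State.Status is "running" and 22/tcp is bound to 127.0.0.1:33063, matching the sshutil line in the pause attempt. The pause still fails because the problem is inside the guest (runc state), not the container itself.
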
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-956814 -n old-k8s-version-956814
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-956814 -n old-k8s-version-956814: exit status 2 (318.925489ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-956814 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-956814 logs -n 25: (1.199520541s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p NoKubernetes-200573 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │                     │
	│ stop    │ -p NoKubernetes-200573                                                                                                                                                                                                                        │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p NoKubernetes-200573 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ ssh     │ cert-options-817096 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-817096       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ ssh     │ -p cert-options-817096 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-817096       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ delete  │ -p cert-options-817096                                                                                                                                                                                                                        │ cert-options-817096       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ ssh     │ -p NoKubernetes-200573 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │                     │
	│ delete  │ -p NoKubernetes-200573                                                                                                                                                                                                                        │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-750025 │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p old-k8s-version-956814 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:27 UTC │
	│ start   │ -p missing-upgrade-294813 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-294813    │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:27 UTC │
	│ stop    │ -p kubernetes-upgrade-750025                                                                                                                                                                                                                  │ kubernetes-upgrade-750025 │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-750025 │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │                     │
	│ delete  │ -p missing-upgrade-294813                                                                                                                                                                                                                     │ missing-upgrade-294813    │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:27 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-956814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │                     │
	│ start   │ -p no-preload-808539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-808539         │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:28 UTC │
	│ stop    │ -p old-k8s-version-956814 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:27 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-956814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:27 UTC │
	│ start   │ -p old-k8s-version-956814 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:28 UTC │
	│ addons  │ enable metrics-server -p no-preload-808539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-808539         │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │                     │
	│ stop    │ -p no-preload-808539 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-808539         │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ addons  │ enable dashboard -p no-preload-808539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-808539         │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ start   │ -p no-preload-808539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-808539         │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │                     │
	│ image   │ old-k8s-version-956814 image list --format=json                                                                                                                                                                                               │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ pause   │ -p old-k8s-version-956814 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:28:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:28:37.991966  245371 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:28:37.992240  245371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:28:37.992250  245371 out.go:374] Setting ErrFile to fd 2...
	I1016 18:28:37.992255  245371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:28:37.992459  245371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:28:37.992912  245371 out.go:368] Setting JSON to false
	I1016 18:28:37.994083  245371 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4266,"bootTime":1760635052,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:28:37.994167  245371 start.go:141] virtualization: kvm guest
	I1016 18:28:37.996132  245371 out.go:179] * [no-preload-808539] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:28:37.997732  245371 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:28:37.997709  245371 notify.go:220] Checking for updates...
	I1016 18:28:38.000692  245371 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:28:38.002094  245371 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:28:38.003631  245371 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 18:28:38.005001  245371 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:28:38.006190  245371 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:28:38.007972  245371 config.go:182] Loaded profile config "no-preload-808539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:28:38.008476  245371 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:28:38.032864  245371 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 18:28:38.033031  245371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:28:38.096441  245371 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-16 18:28:38.086142216 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:28:38.096536  245371 docker.go:318] overlay module found
	I1016 18:28:38.098473  245371 out.go:179] * Using the docker driver based on existing profile
	I1016 18:28:38.099884  245371 start.go:305] selected driver: docker
	I1016 18:28:38.099903  245371 start.go:925] validating driver "docker" against &{Name:no-preload-808539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-808539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:28:38.100010  245371 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:28:38.100577  245371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:28:38.161880  245371 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-16 18:28:38.151406459 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:28:38.162175  245371 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:28:38.162205  245371 cni.go:84] Creating CNI manager for ""
	I1016 18:28:38.162287  245371 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:28:38.162412  245371 start.go:349] cluster config:
	{Name:no-preload-808539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-808539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:28:38.164449  245371 out.go:179] * Starting "no-preload-808539" primary control-plane node in "no-preload-808539" cluster
	I1016 18:28:38.165978  245371 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:28:38.167510  245371 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:28:38.169442  245371 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:28:38.169569  245371 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/no-preload-808539/config.json ...
	I1016 18:28:38.169629  245371 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:28:38.169785  245371 cache.go:107] acquiring lock: {Name:mk5095dd253ada6cafdffc052cfcc257d35f8e95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:28:38.169800  245371 cache.go:107] acquiring lock: {Name:mkbcd9b654a7057c82e0f6752d5bb958a319293f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:28:38.169810  245371 cache.go:107] acquiring lock: {Name:mk25f929ce426bc310e7d6bf6b0485f85ae3fc1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:28:38.169893  245371 cache.go:107] acquiring lock: {Name:mk02856ef96e945ad0f6fa12bc0ae14c8e74b73c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:28:38.169883  245371 cache.go:107] acquiring lock: {Name:mk2a98b8955e77e7ad697f780a1316dd6a72d459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:28:38.169919  245371 cache.go:115] /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1016 18:28:38.169922  245371 cache.go:107] acquiring lock: {Name:mkc8e10ace1dca34792d414ef2608cebceab3283 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:28:38.169948  245371 cache.go:115] /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1016 18:28:38.169933  245371 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 162.011µs
	I1016 18:28:38.169957  245371 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 67.864µs
	I1016 18:28:38.169962  245371 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1016 18:28:38.169965  245371 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1016 18:28:38.169906  245371 cache.go:115] /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1016 18:28:38.169977  245371 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 172.869µs
	I1016 18:28:38.169985  245371 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1016 18:28:38.169908  245371 cache.go:115] /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1016 18:28:38.169928  245371 cache.go:107] acquiring lock: {Name:mkbcd88ff5fbd00dc50ebaef836356d50a346a87 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:28:38.169953  245371 cache.go:107] acquiring lock: {Name:mk2d0ecf42d18c2f5e11041bbb92e9dbf939117f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:28:38.169997  245371 cache.go:115] /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1016 18:28:38.170016  245371 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 96.722µs
	I1016 18:28:38.170023  245371 cache.go:115] /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1016 18:28:38.170026  245371 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1016 18:28:38.169995  245371 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 212.204µs
	I1016 18:28:38.170033  245371 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 210.096µs
	I1016 18:28:38.170039  245371 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1016 18:28:38.170042  245371 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1016 18:28:38.170106  245371 cache.go:115] /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1016 18:28:38.170120  245371 cache.go:115] /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1016 18:28:38.170119  245371 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 250.613µs
	I1016 18:28:38.170131  245371 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1016 18:28:38.170130  245371 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 229.166µs
	I1016 18:28:38.170138  245371 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1016 18:28:38.170148  245371 cache.go:87] Successfully saved all images to host disk.
	I1016 18:28:38.192003  245371 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:28:38.192024  245371 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:28:38.192040  245371 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:28:38.192063  245371 start.go:360] acquireMachinesLock for no-preload-808539: {Name:mkbdf01df0299f6fe6490392002a6bcdc04d7f6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:28:38.192128  245371 start.go:364] duration metric: took 43.911µs to acquireMachinesLock for "no-preload-808539"
	I1016 18:28:38.192149  245371 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:28:38.192158  245371 fix.go:54] fixHost starting: 
	I1016 18:28:38.192461  245371 cli_runner.go:164] Run: docker container inspect no-preload-808539 --format={{.State.Status}}
	I1016 18:28:38.211488  245371 fix.go:112] recreateIfNeeded on no-preload-808539: state=Stopped err=<nil>
	W1016 18:28:38.211517  245371 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:28:36.178776  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
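
	The interleaved I1016 18:28:36 line above belongs to a second concurrent start (PID 228782) polling the apiserver's /healthz endpoint. For reproducing that probe by hand, here is a minimal Go sketch; the URL is copied from the log line, and skipping TLS verification is an assumption made only to keep the sketch self-contained (minikube itself authenticates with the cluster's certificates):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: skip certificate verification so the sketch runs standalone.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for attempt := 0; attempt < 10; attempt++ {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(2 * time.Second) // back off between probes
		}
		fmt.Println("apiserver never became healthy")
	}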
	
	
	==> CRI-O <==
	Oct 16 18:28:05 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:05.576311307Z" level=info msg="Created container 33d7dc72b038f311f8b70695fd1551fb1ba18755060404c26beb1160688914ea: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v4mf2/kubernetes-dashboard" id=461c6ca6-cd31-4e13-b595-27c05f1e0aa1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:28:05 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:05.576986957Z" level=info msg="Starting container: 33d7dc72b038f311f8b70695fd1551fb1ba18755060404c26beb1160688914ea" id=9c8f497a-e9ef-4aad-bcb5-0ec9220cfd73 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:28:05 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:05.578975063Z" level=info msg="Started container" PID=1718 containerID=33d7dc72b038f311f8b70695fd1551fb1ba18755060404c26beb1160688914ea description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v4mf2/kubernetes-dashboard id=9c8f497a-e9ef-4aad-bcb5-0ec9220cfd73 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9e1183802a11619c9e268a658594b7b8ca7c43979d9927520f9cb362833adce
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.461811564Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=59c84602-cbb2-41b9-9ddd-7ac5f9076218 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.462742987Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c0056e0e-f549-4e31-9b58-d4b115628b63 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.463766049Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ff9443bc-a5d8-4b46-8ad2-768c7ae1eabf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.464048318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.468217169Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.468407616Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ac52c498a56bea214999021f37c1346e95a9c52fde97a5ef5d14ca8b9ee1bb51/merged/etc/passwd: no such file or directory"
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.46843894Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ac52c498a56bea214999021f37c1346e95a9c52fde97a5ef5d14ca8b9ee1bb51/merged/etc/group: no such file or directory"
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.468786762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.491736681Z" level=info msg="Created container d0c68f3c4b25019a937ecef6491d6fa56971a561a4450a4b9dc6ac28bdde0ed1: kube-system/storage-provisioner/storage-provisioner" id=ff9443bc-a5d8-4b46-8ad2-768c7ae1eabf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.49227447Z" level=info msg="Starting container: d0c68f3c4b25019a937ecef6491d6fa56971a561a4450a4b9dc6ac28bdde0ed1" id=85c9d714-1a1a-412e-a87d-a7e13e846a7d name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.493875845Z" level=info msg="Started container" PID=1742 containerID=d0c68f3c4b25019a937ecef6491d6fa56971a561a4450a4b9dc6ac28bdde0ed1 description=kube-system/storage-provisioner/storage-provisioner id=85c9d714-1a1a-412e-a87d-a7e13e846a7d name=/runtime.v1.RuntimeService/StartContainer sandboxID=e7dc3aa9646da09fc80eb1eb51e8fa5cf371f80cd1442a682c7181ff3951d897
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.350115707Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=33cc697d-e791-4150-b4a0-60cac56ee339 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.351147799Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=454dc11c-520e-4d62-ae33-1a644f82220a name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.352056764Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9/dashboard-metrics-scraper" id=fbf86681-c11a-4232-b929-2be3c4e645c0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.352353433Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.359139473Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.359571704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.393413515Z" level=info msg="Created container 7648e093dcf554392b6ee6e3cab35361de2ce6729397abb12e1d0b18c2956e63: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9/dashboard-metrics-scraper" id=fbf86681-c11a-4232-b929-2be3c4e645c0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.394156191Z" level=info msg="Starting container: 7648e093dcf554392b6ee6e3cab35361de2ce6729397abb12e1d0b18c2956e63" id=c69656a7-dba6-4239-9132-3f8b6882c0fe name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.396463102Z" level=info msg="Started container" PID=1758 containerID=7648e093dcf554392b6ee6e3cab35361de2ce6729397abb12e1d0b18c2956e63 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9/dashboard-metrics-scraper id=c69656a7-dba6-4239-9132-3f8b6882c0fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=793a7b1ebcbe4cd5c3f13fc5d430ebb76c8e676903ece6e7443d36f40ec33e3b
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.479709467Z" level=info msg="Removing container: 8c83ecca0c41f4acd7cc323cfabab00f1a90d1daa256d9c7dc0e12f19acf8f95" id=6c8df97e-47f8-4547-a357-96168fa8e935 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.489399838Z" level=info msg="Removed container 8c83ecca0c41f4acd7cc323cfabab00f1a90d1daa256d9c7dc0e12f19acf8f95: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9/dashboard-metrics-scraper" id=6c8df97e-47f8-4547-a357-96168fa8e935 name=/runtime.v1.RuntimeService/RemoveContainer
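
	Each Created/Started/Removed entry above is a gRPC call on CRI-O's /runtime.v1.RuntimeService. A minimal Go sketch of a client for the same API, dialing the socket path recorded in the node annotations below (unix:///var/run/crio/crio.sock); this is an illustrative client under those assumptions, not code from the test harness:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O listens on a local unix socket; the socket itself carries no TLS.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same RPC family as the log entries above (RuntimeService).
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Id, c.Metadata.Name, c.State)
		}
	}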
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	7648e093dcf55       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   793a7b1ebcbe4       dashboard-metrics-scraper-5f989dc9cf-hfwf9       kubernetes-dashboard
	d0c68f3c4b250       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   e7dc3aa9646da       storage-provisioner                              kube-system
	33d7dc72b038f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago      Running             kubernetes-dashboard        0                   b9e1183802a11       kubernetes-dashboard-8694d4445c-v4mf2            kubernetes-dashboard
	73533f2caeb1c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           55 seconds ago      Running             coredns                     0                   1b281ab740139       coredns-5dd5756b68-kdcm7                         kube-system
	f85605290552a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   1f35f2b705b2a       kindnet-94l8q                                    kube-system
	308aac48e5218       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   0ac990010ee29       busybox                                          default
	1bef49206beb8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   e7dc3aa9646da       storage-provisioner                              kube-system
	995d1735348a9       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           55 seconds ago      Running             kube-proxy                  0                   ab717c8086eb2       kube-proxy-nkwcm                                 kube-system
	e6e794e317e67       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           58 seconds ago      Running             kube-apiserver              0                   e032a3f9fb1df       kube-apiserver-old-k8s-version-956814            kube-system
	e255d27c3903c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           58 seconds ago      Running             etcd                        0                   e3dba99dc4aa0       etcd-old-k8s-version-956814                      kube-system
	58a737ae76bdf       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           58 seconds ago      Running             kube-controller-manager     0                   d2d6991ef6a08       kube-controller-manager-old-k8s-version-956814   kube-system
	04c714a2b0c86       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           58 seconds ago      Running             kube-scheduler              0                   25238640272ce       kube-scheduler-old-k8s-version-956814            kube-system
	
	
	==> coredns [73533f2caeb1c8ba0bf7613592d2735400b8cced901f45bb29d25fb2ac5be519] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54403 - 9834 "HINFO IN 3006739082644964322.902435482909859467. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.49418236s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
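
	The final warning shows CoreDNS's kubernetes plugin timing out against 10.96.0.1:443, the in-cluster apiserver VIP from the ServiceCIDR 10.96.0.0/12 recorded above. A quick Go sketch of the same reachability check, useful when triaging this from inside a pod; the address and port come straight from the log line:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same target CoreDNS failed to reach: the kubernetes service VIP.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
		if err != nil {
			fmt.Println("unreachable:", err) // mirrors the i/o timeout in the log
			return
		}
		conn.Close()
		fmt.Println("VIP reachable; the earlier timeout was transient")
	}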
	
	
	==> describe nodes <==
	Name:               old-k8s-version-956814
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-956814
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=old-k8s-version-956814
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_26_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:26:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-956814
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:28:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:28:17 +0000   Thu, 16 Oct 2025 18:26:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:28:17 +0000   Thu, 16 Oct 2025 18:26:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:28:17 +0000   Thu, 16 Oct 2025 18:26:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 18:28:17 +0000   Thu, 16 Oct 2025 18:27:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-956814
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                16c7f49b-fe0a-4b26-a8a7-b5d233753b17
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-kdcm7                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-old-k8s-version-956814                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m4s
	  kube-system                 kindnet-94l8q                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-old-k8s-version-956814             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-956814    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-nkwcm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-old-k8s-version-956814             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-hfwf9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-v4mf2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 111s                   kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-956814 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-956814 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-956814 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m5s                   kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m4s                   kubelet          Node old-k8s-version-956814 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s                   kubelet          Node old-k8s-version-956814 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m4s                   kubelet          Node old-k8s-version-956814 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           113s                   node-controller  Node old-k8s-version-956814 event: Registered Node old-k8s-version-956814 in Controller
	  Normal  NodeReady                98s                    kubelet          Node old-k8s-version-956814 status is now: NodeReady
	  Normal  Starting                 59s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)      kubelet          Node old-k8s-version-956814 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)      kubelet          Node old-k8s-version-956814 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)      kubelet          Node old-k8s-version-956814 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                    node-controller  Node old-k8s-version-956814 event: Registered Node old-k8s-version-956814 in Controller
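
	This dump is kubectl-describe output; the Ready/MemoryPressure/DiskPressure rows come from node.Status.Conditions. A minimal client-go sketch that reads the same conditions programmatically; the kubeconfig path is the one used throughout this run, everything else is standard client-go:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from the minikube logs in this report.
		config, err := clientcmd.BuildConfigFromFlags("",
			"/home/jenkins/minikube-integration/21738-8849/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}
		node, err := clientset.CoreV1().Nodes().Get(context.Background(),
			"old-k8s-version-956814", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// Prints the same Type/Status/Reason columns shown in the Conditions table.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}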
	
	
	==> dmesg <==
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.008703] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.022984] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023933] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +2.047848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +4.031714] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +8.063410] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[ +16.382910] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[Oct16 17:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	
	
	==> etcd [e255d27c3903c0fe570376a329840373a1ad5b5caca41fc82de4b5a229ebafb0] <==
	{"level":"info","ts":"2025-10-16T18:27:44.942529Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-16T18:27:44.942974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-10-16T18:27:44.943062Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-10-16T18:27:44.94317Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f23060b075c4c089","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-10-16T18:27:44.943358Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-16T18:27:44.943423Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-16T18:27:44.944506Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-16T18:27:44.944814Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-16T18:27:44.944886Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-16T18:27:44.945369Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-16T18:27:44.945426Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-16T18:27:45.732832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-16T18:27:45.732897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-16T18:27:45.732946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-16T18:27:45.73297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-10-16T18:27:45.73298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-16T18:27:45.732992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-10-16T18:27:45.733003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-16T18:27:45.734051Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-956814 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-16T18:27:45.735192Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-16T18:27:45.736556Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-16T18:27:45.73658Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-16T18:27:45.735302Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-16T18:27:45.736903Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-16T18:27:45.74099Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 18:28:43 up  1:11,  0 user,  load average: 1.81, 2.32, 1.57
	Linux old-k8s-version-956814 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f85605290552a7127c754cdcd6384894e324c1a39c7ed2c5293fece11354cded] <==
	I1016 18:27:47.957505       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 18:27:47.957807       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1016 18:27:47.957990       1 main.go:148] setting mtu 1500 for CNI 
	I1016 18:27:47.958009       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 18:27:47.958040       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T18:27:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 18:27:48.235080       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 18:27:48.235141       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 18:27:48.235154       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 18:27:48.255319       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 18:27:48.555894       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 18:27:48.555927       1 metrics.go:72] Registering metrics
	I1016 18:27:48.556001       1 controller.go:711] "Syncing nftables rules"
	I1016 18:27:58.235637       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:27:58.235671       1 main.go:301] handling current node
	I1016 18:28:08.235809       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:28:08.235853       1 main.go:301] handling current node
	I1016 18:28:18.235053       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:28:18.235095       1 main.go:301] handling current node
	I1016 18:28:28.236009       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:28:28.236040       1 main.go:301] handling current node
	I1016 18:28:38.242828       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:28:38.242866       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e6e794e317e67fe62de737c5d5d21f76ffd898adc393e7b8d3b5127f203478a3] <==
	I1016 18:27:46.814575       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:27:46.821650       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1016 18:27:46.821662       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 18:27:46.821709       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1016 18:27:46.822780       1 shared_informer.go:318] Caches are synced for configmaps
	I1016 18:27:46.823870       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1016 18:27:46.823885       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1016 18:27:46.823888       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1016 18:27:46.823919       1 aggregator.go:166] initial CRD sync complete...
	I1016 18:27:46.823926       1 autoregister_controller.go:141] Starting autoregister controller
	I1016 18:27:46.823939       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 18:27:46.823947       1 cache.go:39] Caches are synced for autoregister controller
	I1016 18:27:46.844759       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1016 18:27:47.730251       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 18:27:47.818081       1 controller.go:624] quota admission added evaluator for: namespaces
	I1016 18:27:47.853743       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1016 18:27:47.875838       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 18:27:47.883194       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 18:27:47.891688       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1016 18:27:47.928626       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.19.146"}
	I1016 18:27:47.941256       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.93.160"}
	I1016 18:27:59.073402       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1016 18:27:59.180979       1 controller.go:624] quota admission added evaluator for: endpoints
	I1016 18:27:59.271635       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [58a737ae76bdf77210a125a06ade45f191a00aba7f2561852cfb13f05b054511] <==
	I1016 18:27:59.095031       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.707793ms"
	I1016 18:27:59.101467       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.354673ms"
	I1016 18:27:59.101834       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="73.704µs"
	I1016 18:27:59.105123       1 shared_informer.go:318] Caches are synced for PV protection
	I1016 18:27:59.105572       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="16.509592ms"
	I1016 18:27:59.105663       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.525µs"
	I1016 18:27:59.105775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.248µs"
	I1016 18:27:59.114464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="108.716µs"
	I1016 18:27:59.131990       1 shared_informer.go:318] Caches are synced for endpoint
	I1016 18:27:59.149546       1 shared_informer.go:318] Caches are synced for resource quota
	I1016 18:27:59.182697       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1016 18:27:59.185213       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1016 18:27:59.193342       1 shared_informer.go:318] Caches are synced for resource quota
	I1016 18:27:59.508464       1 shared_informer.go:318] Caches are synced for garbage collector
	I1016 18:27:59.580933       1 shared_informer.go:318] Caches are synced for garbage collector
	I1016 18:27:59.580963       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1016 18:28:02.429583       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.978µs"
	I1016 18:28:03.436666       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.599µs"
	I1016 18:28:04.438750       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.254µs"
	I1016 18:28:06.448766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.911595ms"
	I1016 18:28:06.449016       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="94.665µs"
	I1016 18:28:24.489291       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.277µs"
	I1016 18:28:27.396120       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.955815ms"
	I1016 18:28:27.396217       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.817µs"
	I1016 18:28:29.401365       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.825µs"
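The repeated "Finished syncing" entries for dashboard-metrics-scraper-5f989dc9cf track the controller reconciling that ReplicaSet each time its pod restarts (the kubelet section below shows the CrashLoopBackOff). To read the ReplicaSet's ready count directly (same caveat: a sketch using names from these logs):

	kubectl --context old-k8s-version-956814 -n kubernetes-dashboard get rs dashboard-metrics-scraper-5f989dc9cf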
	
	
	==> kube-proxy [995d1735348a9b1f431941e2e8c8991ad732311551103b42bf5418984a4dddf1] <==
	I1016 18:27:47.824165       1 server_others.go:69] "Using iptables proxy"
	I1016 18:27:47.833923       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1016 18:27:47.856255       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:27:47.860003       1 server_others.go:152] "Using iptables Proxier"
	I1016 18:27:47.860081       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1016 18:27:47.860090       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1016 18:27:47.860123       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1016 18:27:47.860404       1 server.go:846] "Version info" version="v1.28.0"
	I1016 18:27:47.860425       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:27:47.861706       1 config.go:188] "Starting service config controller"
	I1016 18:27:47.861778       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1016 18:27:47.861780       1 config.go:315] "Starting node config controller"
	I1016 18:27:47.861797       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1016 18:27:47.861788       1 config.go:97] "Starting endpoint slice config controller"
	I1016 18:27:47.861827       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1016 18:27:47.962570       1 shared_informer.go:318] Caches are synced for node config
	I1016 18:27:47.962607       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1016 18:27:47.962612       1 shared_informer.go:318] Caches are synced for service config
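kube-proxy comes up in iptables mode; the IPv6 "no cluster CIDR for family" warning is expected on this IPv4-only cluster and only means local-traffic detection is a no-op for IPv6. One way to confirm the mode from inside the node, assuming the default metrics bind of 127.0.0.1:10249 and that curl is available in the node image:

	# expected output: iptables
	out/minikube-linux-amd64 -p old-k8s-version-956814 ssh -- curl -s http://127.0.0.1:10249/proxyMode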
	
	
	==> kube-scheduler [04c714a2b0c86cdc256763ea2928fc53c7c7d744cb6468b9458d572797f2c163] <==
	W1016 18:27:46.810267       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1016 18:27:46.810282       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1016 18:27:46.810345       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1016 18:27:46.810364       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1016 18:27:46.810388       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1016 18:27:46.810402       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1016 18:27:46.810451       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1016 18:27:46.810469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1016 18:27:46.810524       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1016 18:27:46.810983       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1016 18:27:46.811055       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1016 18:27:46.811011       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1016 18:27:46.811133       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1016 18:27:46.810579       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1016 18:27:46.811204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1016 18:27:46.810583       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1016 18:27:46.811266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1016 18:27:46.810671       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1016 18:27:46.811383       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1016 18:27:46.810797       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1016 18:27:46.811428       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1016 18:27:46.810530       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1016 18:27:46.811462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1016 18:27:46.811332       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1016 18:27:48.003210       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
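The burst of "forbidden" reflector warnings is the usual startup race: the scheduler begins listing resources before the apiserver has synced RBAC for system:kube-scheduler; the reflectors retry, and the final "Caches are synced" line shows recovery. If the warnings persisted, an RBAC spot-check would look like this (a sketch; impersonation requires sufficient rights on the kubeconfig user):

	# expected output once RBAC is healthy: yes
	kubectl --context old-k8s-version-956814 auth can-i list pods --as=system:kube-scheduler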
	
	
	==> kubelet <==
	Oct 16 18:27:59 old-k8s-version-956814 kubelet[716]: I1016 18:27:59.092686     716 topology_manager.go:215] "Topology Admit Handler" podUID="30ae3852-d8ac-427d-8da1-8439a752e2d4" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-v4mf2"
	Oct 16 18:27:59 old-k8s-version-956814 kubelet[716]: I1016 18:27:59.154828     716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/30ae3852-d8ac-427d-8da1-8439a752e2d4-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-v4mf2\" (UID: \"30ae3852-d8ac-427d-8da1-8439a752e2d4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v4mf2"
	Oct 16 18:27:59 old-k8s-version-956814 kubelet[716]: I1016 18:27:59.154889     716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6769c2f0-74d8-4506-988d-c94ce7816b66-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-hfwf9\" (UID: \"6769c2f0-74d8-4506-988d-c94ce7816b66\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9"
	Oct 16 18:27:59 old-k8s-version-956814 kubelet[716]: I1016 18:27:59.154928     716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7488c\" (UniqueName: \"kubernetes.io/projected/6769c2f0-74d8-4506-988d-c94ce7816b66-kube-api-access-7488c\") pod \"dashboard-metrics-scraper-5f989dc9cf-hfwf9\" (UID: \"6769c2f0-74d8-4506-988d-c94ce7816b66\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9"
	Oct 16 18:27:59 old-k8s-version-956814 kubelet[716]: I1016 18:27:59.154980     716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdzcp\" (UniqueName: \"kubernetes.io/projected/30ae3852-d8ac-427d-8da1-8439a752e2d4-kube-api-access-xdzcp\") pod \"kubernetes-dashboard-8694d4445c-v4mf2\" (UID: \"30ae3852-d8ac-427d-8da1-8439a752e2d4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v4mf2"
	Oct 16 18:28:02 old-k8s-version-956814 kubelet[716]: I1016 18:28:02.416950     716 scope.go:117] "RemoveContainer" containerID="69080c51b97cd78b8ed0eb4f16d0eeec9aa89158f853e0fe3b2ded11ee015e31"
	Oct 16 18:28:03 old-k8s-version-956814 kubelet[716]: I1016 18:28:03.422374     716 scope.go:117] "RemoveContainer" containerID="69080c51b97cd78b8ed0eb4f16d0eeec9aa89158f853e0fe3b2ded11ee015e31"
	Oct 16 18:28:03 old-k8s-version-956814 kubelet[716]: I1016 18:28:03.422645     716 scope.go:117] "RemoveContainer" containerID="8c83ecca0c41f4acd7cc323cfabab00f1a90d1daa256d9c7dc0e12f19acf8f95"
	Oct 16 18:28:03 old-k8s-version-956814 kubelet[716]: E1016 18:28:03.423017     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hfwf9_kubernetes-dashboard(6769c2f0-74d8-4506-988d-c94ce7816b66)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9" podUID="6769c2f0-74d8-4506-988d-c94ce7816b66"
	Oct 16 18:28:04 old-k8s-version-956814 kubelet[716]: I1016 18:28:04.426863     716 scope.go:117] "RemoveContainer" containerID="8c83ecca0c41f4acd7cc323cfabab00f1a90d1daa256d9c7dc0e12f19acf8f95"
	Oct 16 18:28:04 old-k8s-version-956814 kubelet[716]: E1016 18:28:04.427220     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hfwf9_kubernetes-dashboard(6769c2f0-74d8-4506-988d-c94ce7816b66)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9" podUID="6769c2f0-74d8-4506-988d-c94ce7816b66"
	Oct 16 18:28:06 old-k8s-version-956814 kubelet[716]: I1016 18:28:06.441834     716 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v4mf2" podStartSLOduration=1.314410847 podCreationTimestamp="2025-10-16 18:27:59 +0000 UTC" firstStartedPulling="2025-10-16 18:27:59.41615197 +0000 UTC m=+15.179465150" lastFinishedPulling="2025-10-16 18:28:05.543514188 +0000 UTC m=+21.306827368" observedRunningTime="2025-10-16 18:28:06.441682488 +0000 UTC m=+22.204995689" watchObservedRunningTime="2025-10-16 18:28:06.441773065 +0000 UTC m=+22.205086264"
	Oct 16 18:28:09 old-k8s-version-956814 kubelet[716]: I1016 18:28:09.391995     716 scope.go:117] "RemoveContainer" containerID="8c83ecca0c41f4acd7cc323cfabab00f1a90d1daa256d9c7dc0e12f19acf8f95"
	Oct 16 18:28:09 old-k8s-version-956814 kubelet[716]: E1016 18:28:09.392327     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hfwf9_kubernetes-dashboard(6769c2f0-74d8-4506-988d-c94ce7816b66)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9" podUID="6769c2f0-74d8-4506-988d-c94ce7816b66"
	Oct 16 18:28:18 old-k8s-version-956814 kubelet[716]: I1016 18:28:18.461416     716 scope.go:117] "RemoveContainer" containerID="1bef49206beb8688b2e461115ae707bb2a04be3521d5212c1567cb3df756f6ff"
	Oct 16 18:28:24 old-k8s-version-956814 kubelet[716]: I1016 18:28:24.349432     716 scope.go:117] "RemoveContainer" containerID="8c83ecca0c41f4acd7cc323cfabab00f1a90d1daa256d9c7dc0e12f19acf8f95"
	Oct 16 18:28:24 old-k8s-version-956814 kubelet[716]: I1016 18:28:24.478537     716 scope.go:117] "RemoveContainer" containerID="8c83ecca0c41f4acd7cc323cfabab00f1a90d1daa256d9c7dc0e12f19acf8f95"
	Oct 16 18:28:24 old-k8s-version-956814 kubelet[716]: I1016 18:28:24.478863     716 scope.go:117] "RemoveContainer" containerID="7648e093dcf554392b6ee6e3cab35361de2ce6729397abb12e1d0b18c2956e63"
	Oct 16 18:28:24 old-k8s-version-956814 kubelet[716]: E1016 18:28:24.479235     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hfwf9_kubernetes-dashboard(6769c2f0-74d8-4506-988d-c94ce7816b66)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9" podUID="6769c2f0-74d8-4506-988d-c94ce7816b66"
	Oct 16 18:28:29 old-k8s-version-956814 kubelet[716]: I1016 18:28:29.391686     716 scope.go:117] "RemoveContainer" containerID="7648e093dcf554392b6ee6e3cab35361de2ce6729397abb12e1d0b18c2956e63"
	Oct 16 18:28:29 old-k8s-version-956814 kubelet[716]: E1016 18:28:29.392117     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hfwf9_kubernetes-dashboard(6769c2f0-74d8-4506-988d-c94ce7816b66)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9" podUID="6769c2f0-74d8-4506-988d-c94ce7816b66"
	Oct 16 18:28:41 old-k8s-version-956814 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 18:28:41 old-k8s-version-956814 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 18:28:41 old-k8s-version-956814 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 16 18:28:41 old-k8s-version-956814 systemd[1]: kubelet.service: Consumed 1.561s CPU time.
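The kubelet log shows the dashboard-metrics-scraper container crash-looping with the back-off doubling from 10s to 20s, and the kubelet itself being stopped by systemd at 18:28:41 (the pause under test). To capture why the scraper keeps exiting, the crashed attempt's output could be pulled with (pod name taken from the events above):

	kubectl --context old-k8s-version-956814 -n kubernetes-dashboard logs dashboard-metrics-scraper-5f989dc9cf-hfwf9 --previous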
	
	
	==> kubernetes-dashboard [33d7dc72b038f311f8b70695fd1551fb1ba18755060404c26beb1160688914ea] <==
	2025/10/16 18:28:05 Starting overwatch
	2025/10/16 18:28:05 Using namespace: kubernetes-dashboard
	2025/10/16 18:28:05 Using in-cluster config to connect to apiserver
	2025/10/16 18:28:05 Using secret token for csrf signing
	2025/10/16 18:28:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/16 18:28:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/16 18:28:05 Successful initial request to the apiserver, version: v1.28.0
	2025/10/16 18:28:05 Generating JWE encryption key
	2025/10/16 18:28:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/16 18:28:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/16 18:28:05 Initializing JWE encryption key from synchronized object
	2025/10/16 18:28:05 Creating in-cluster Sidecar client
	2025/10/16 18:28:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 18:28:05 Serving insecurely on HTTP port: 9090
	2025/10/16 18:28:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
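The dashboard's metric client health check fails because the dashboard-metrics-scraper Service has no ready endpoints while its pod crash-loops; the dashboard itself keeps serving on 9090 and retries every 30 seconds. A direct check of the Service's backends (a sketch using names from these logs):

	# an empty ENDPOINTS column here matches the health-check failures above
	kubectl --context old-k8s-version-956814 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper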
	
	
	==> storage-provisioner [1bef49206beb8688b2e461115ae707bb2a04be3521d5212c1567cb3df756f6ff] <==
	I1016 18:27:47.770432       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1016 18:28:17.773789       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
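This first storage-provisioner instance died because it could not reach the in-cluster apiserver VIP (10.96.0.1:443) within its timeout, which is common right after a restart while kube-proxy is still programming Service rules; the replacement instance below starts cleanly. The VIP's backing endpoint can be checked with:

	# should list the apiserver endpoint, e.g. 192.168.103.2:8443
	kubectl --context old-k8s-version-956814 get endpoints kubernetes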
	
	
	==> storage-provisioner [d0c68f3c4b25019a937ecef6491d6fa56971a561a4450a4b9dc6ac28bdde0ed1] <==
	I1016 18:28:18.506100       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 18:28:18.514467       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 18:28:18.514521       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1016 18:28:35.910506       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 18:28:35.910587       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2e1593ca-3024-4a18-b57d-738a19d42c4d", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-956814_df7cc492-0892-4f80-9cc9-cb066ff6fa00 became leader
	I1016 18:28:35.910669       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-956814_df7cc492-0892-4f80-9cc9-cb066ff6fa00!
	I1016 18:28:36.011494       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-956814_df7cc492-0892-4f80-9cc9-cb066ff6fa00!
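The replacement provisioner waits roughly 17 seconds to acquire the k8s.io-minikube-hostpath lease because the previous holder's lease had to expire first. This older election client records the holder on an Endpoints object (as the Event above suggests), so the current leader can be inspected with (a sketch; the leader annotation is the one used by client-go's Endpoints-based resource lock):

	kubectl --context old-k8s-version-956814 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml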
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-956814 -n old-k8s-version-956814
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-956814 -n old-k8s-version-956814: exit status 2 (330.093804ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-956814 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
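For context on "exit status 2 (may be ok)": minikube status encodes component states into the exit code's bits (documented in `minikube status --help`), so a non-zero status while the host is still Running is expected for a freshly paused cluster, and the harness treats it as non-fatal. A manual reproduction of the check would look like:

	out/minikube-linux-amd64 status -p old-k8s-version-956814 --format={{.APIServer}}; echo "exit=$?"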
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-956814
helpers_test.go:243: (dbg) docker inspect old-k8s-version-956814:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d",
	        "Created": "2025-10-16T18:26:24.391336039Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 239007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:27:37.937839611Z",
	            "FinishedAt": "2025-10-16T18:27:35.867412237Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d/hostname",
	        "HostsPath": "/var/lib/docker/containers/2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d/hosts",
	        "LogPath": "/var/lib/docker/containers/2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d/2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d-json.log",
	        "Name": "/old-k8s-version-956814",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-956814:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-956814",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2fe013b2be52d205bc9b036fb2f1afb53d8c7b766fd28a69695e2170208e9f3d",
	                "LowerDir": "/var/lib/docker/overlay2/6dc9fe3850741937f409c4be942acfc27b5b90ea6a67e2a0b6209b82f9ab1b71-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6dc9fe3850741937f409c4be942acfc27b5b90ea6a67e2a0b6209b82f9ab1b71/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6dc9fe3850741937f409c4be942acfc27b5b90ea6a67e2a0b6209b82f9ab1b71/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6dc9fe3850741937f409c4be942acfc27b5b90ea6a67e2a0b6209b82f9ab1b71/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-956814",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-956814/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-956814",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-956814",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-956814",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b1da086b8b6a8703c2f69b4a059f1b9b9b6b94614e3fd18b9aa22e72ad625a1c",
	            "SandboxKey": "/var/run/docker/netns/b1da086b8b6a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-956814": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:0a:38:d4:04:7c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d1d700daadff6f62e8b6f47bfafd5296def1ddd0bdc304135db2dbcfd26dcae3",
	                    "EndpointID": "8f040f554dc7a93506e65627d6c34d2c600ac55054b60826a98c1266be5ac301",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-956814",
	                        "2fe013b2be52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
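The inspect output shows every guest port published on 127.0.0.1 with an ephemeral host port (e.g. 8443 → 33066 for the apiserver). `docker port` resolves the same mapping without parsing the JSON:

	# expected output: 127.0.0.1:33066
	docker port old-k8s-version-956814 8443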
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-956814 -n old-k8s-version-956814
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-956814 -n old-k8s-version-956814: exit status 2 (332.413046ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-956814 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-956814 logs -n 25: (1.30934156s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p NoKubernetes-200573 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │                     │
	│ stop    │ -p NoKubernetes-200573                                                                                                                                                                                                                        │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p NoKubernetes-200573 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ ssh     │ cert-options-817096 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-817096       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ ssh     │ -p cert-options-817096 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-817096       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ delete  │ -p cert-options-817096                                                                                                                                                                                                                        │ cert-options-817096       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ ssh     │ -p NoKubernetes-200573 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │                     │
	│ delete  │ -p NoKubernetes-200573                                                                                                                                                                                                                        │ NoKubernetes-200573       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-750025 │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p old-k8s-version-956814 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:27 UTC │
	│ start   │ -p missing-upgrade-294813 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-294813    │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:27 UTC │
	│ stop    │ -p kubernetes-upgrade-750025                                                                                                                                                                                                                  │ kubernetes-upgrade-750025 │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-750025 │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │                     │
	│ delete  │ -p missing-upgrade-294813                                                                                                                                                                                                                     │ missing-upgrade-294813    │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:27 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-956814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │                     │
	│ start   │ -p no-preload-808539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-808539         │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:28 UTC │
	│ stop    │ -p old-k8s-version-956814 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:27 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-956814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:27 UTC │
	│ start   │ -p old-k8s-version-956814 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:28 UTC │
	│ addons  │ enable metrics-server -p no-preload-808539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-808539         │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │                     │
	│ stop    │ -p no-preload-808539 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-808539         │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ addons  │ enable dashboard -p no-preload-808539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-808539         │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ start   │ -p no-preload-808539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-808539         │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │                     │
	│ image   │ old-k8s-version-956814 image list --format=json                                                                                                                                                                                               │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ pause   │ -p old-k8s-version-956814 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-956814    │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:28:37
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:28:37.991966  245371 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:28:37.992240  245371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:28:37.992250  245371 out.go:374] Setting ErrFile to fd 2...
	I1016 18:28:37.992255  245371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:28:37.992459  245371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:28:37.992912  245371 out.go:368] Setting JSON to false
	I1016 18:28:37.994083  245371 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4266,"bootTime":1760635052,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:28:37.994167  245371 start.go:141] virtualization: kvm guest
	I1016 18:28:37.996132  245371 out.go:179] * [no-preload-808539] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:28:37.997732  245371 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:28:37.997709  245371 notify.go:220] Checking for updates...
	I1016 18:28:38.000692  245371 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:28:38.002094  245371 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:28:38.003631  245371 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 18:28:38.005001  245371 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:28:38.006190  245371 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:28:38.007972  245371 config.go:182] Loaded profile config "no-preload-808539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:28:38.008476  245371 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:28:38.032864  245371 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 18:28:38.033031  245371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:28:38.096441  245371 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-16 18:28:38.086142216 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:28:38.096536  245371 docker.go:318] overlay module found
	I1016 18:28:38.098473  245371 out.go:179] * Using the docker driver based on existing profile
	I1016 18:28:38.099884  245371 start.go:305] selected driver: docker
	I1016 18:28:38.099903  245371 start.go:925] validating driver "docker" against &{Name:no-preload-808539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-808539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:28:38.100010  245371 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:28:38.100577  245371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:28:38.161880  245371 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-16 18:28:38.151406459 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:28:38.162175  245371 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:28:38.162205  245371 cni.go:84] Creating CNI manager for ""
	I1016 18:28:38.162287  245371 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:28:38.162412  245371 start.go:349] cluster config:
	{Name:no-preload-808539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-808539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
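Note: the cluster config dumped above is the same structure minikube persists as JSON in the profile directory (the "Saving config to ... config.json" line a few entries below). A minimal sketch for pretty-printing it on the CI host, assuming python3 is installed there:

    # hypothetical inspection step; the path is copied from the profile.go log line below
    python3 -m json.tool /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/no-preload-808539/config.json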
	I1016 18:28:38.164449  245371 out.go:179] * Starting "no-preload-808539" primary control-plane node in "no-preload-808539" cluster
	I1016 18:28:38.165978  245371 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:28:38.167510  245371 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:28:38.169442  245371 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:28:38.169569  245371 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/no-preload-808539/config.json ...
	I1016 18:28:38.169629  245371 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:28:38.169785  245371 cache.go:107] acquiring lock: {Name:mk5095dd253ada6cafdffc052cfcc257d35f8e95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:28:38.169800  245371 cache.go:107] acquiring lock: {Name:mkbcd9b654a7057c82e0f6752d5bb958a319293f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:28:38.169810  245371 cache.go:107] acquiring lock: {Name:mk25f929ce426bc310e7d6bf6b0485f85ae3fc1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:28:38.169893  245371 cache.go:107] acquiring lock: {Name:mk02856ef96e945ad0f6fa12bc0ae14c8e74b73c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:28:38.169883  245371 cache.go:107] acquiring lock: {Name:mk2a98b8955e77e7ad697f780a1316dd6a72d459 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:28:38.169919  245371 cache.go:115] /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1016 18:28:38.169922  245371 cache.go:107] acquiring lock: {Name:mkc8e10ace1dca34792d414ef2608cebceab3283 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:28:38.169948  245371 cache.go:115] /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1016 18:28:38.169933  245371 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 162.011µs
	I1016 18:28:38.169957  245371 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 67.864µs
	I1016 18:28:38.169962  245371 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1016 18:28:38.169965  245371 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1016 18:28:38.169906  245371 cache.go:115] /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1016 18:28:38.169977  245371 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 172.869µs
	I1016 18:28:38.169985  245371 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1016 18:28:38.169908  245371 cache.go:115] /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1016 18:28:38.169928  245371 cache.go:107] acquiring lock: {Name:mkbcd88ff5fbd00dc50ebaef836356d50a346a87 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:28:38.169953  245371 cache.go:107] acquiring lock: {Name:mk2d0ecf42d18c2f5e11041bbb92e9dbf939117f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:28:38.169997  245371 cache.go:115] /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1016 18:28:38.170016  245371 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 96.722µs
	I1016 18:28:38.170023  245371 cache.go:115] /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1016 18:28:38.170026  245371 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1016 18:28:38.169995  245371 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 212.204µs
	I1016 18:28:38.170033  245371 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 210.096µs
	I1016 18:28:38.170039  245371 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1016 18:28:38.170042  245371 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1016 18:28:38.170106  245371 cache.go:115] /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1016 18:28:38.170120  245371 cache.go:115] /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1016 18:28:38.170119  245371 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 250.613µs
	I1016 18:28:38.170131  245371 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1016 18:28:38.170130  245371 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 229.166µs
	I1016 18:28:38.170138  245371 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1016 18:28:38.170148  245371 cache.go:87] Successfully saved all images to host disk.
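Note: every control-plane image was a cache hit here (each lookup took only microseconds), so no registry pulls were needed. A sketch for listing the cached tarballs, with the directory taken from the cache paths above:

    # hypothetical check of the on-disk image cache
    ls -lh /home/jenkins/minikube-integration/21738-8849/.minikube/cache/images/amd64/registry.k8s.io/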
	I1016 18:28:38.192003  245371 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:28:38.192024  245371 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:28:38.192040  245371 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:28:38.192063  245371 start.go:360] acquireMachinesLock for no-preload-808539: {Name:mkbdf01df0299f6fe6490392002a6bcdc04d7f6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:28:38.192128  245371 start.go:364] duration metric: took 43.911µs to acquireMachinesLock for "no-preload-808539"
	I1016 18:28:38.192149  245371 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:28:38.192158  245371 fix.go:54] fixHost starting: 
	I1016 18:28:38.192461  245371 cli_runner.go:164] Run: docker container inspect no-preload-808539 --format={{.State.Status}}
	I1016 18:28:38.211488  245371 fix.go:112] recreateIfNeeded on no-preload-808539: state=Stopped err=<nil>
	W1016 18:28:38.211517  245371 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:28:36.178776  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:28:38.213912  245371 out.go:252] * Restarting existing docker container for "no-preload-808539" ...
	I1016 18:28:38.213979  245371 cli_runner.go:164] Run: docker start no-preload-808539
	I1016 18:28:38.473078  245371 cli_runner.go:164] Run: docker container inspect no-preload-808539 --format={{.State.Status}}
	I1016 18:28:38.493247  245371 kic.go:430] container "no-preload-808539" state is running.
	I1016 18:28:38.493625  245371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-808539
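Note: the docker container inspect calls above use Go templates to pull out the container state and network addresses; the same queries work by hand (quoting added for an interactive shell):

    # the same templates minikube issues above, runnable manually
    docker container inspect no-preload-808539 --format '{{.State.Status}}'
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' no-preload-808539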
	I1016 18:28:38.513211  245371 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/no-preload-808539/config.json ...
	I1016 18:28:38.513502  245371 machine.go:93] provisionDockerMachine start ...
	I1016 18:28:38.513590  245371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-808539
	I1016 18:28:38.534023  245371 main.go:141] libmachine: Using SSH client type: native
	I1016 18:28:38.534287  245371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1016 18:28:38.534306  245371 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:28:38.534851  245371 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46780->127.0.0.1:33068: read: connection reset by peer
	I1016 18:28:41.683516  245371 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-808539
	
	I1016 18:28:41.683542  245371 ubuntu.go:182] provisioning hostname "no-preload-808539"
	I1016 18:28:41.683605  245371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-808539
	I1016 18:28:41.703499  245371 main.go:141] libmachine: Using SSH client type: native
	I1016 18:28:41.703726  245371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1016 18:28:41.703742  245371 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-808539 && echo "no-preload-808539" | sudo tee /etc/hostname
	I1016 18:28:41.850054  245371 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-808539
	
	I1016 18:28:41.850127  245371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-808539
	I1016 18:28:41.869351  245371 main.go:141] libmachine: Using SSH client type: native
	I1016 18:28:41.869619  245371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1016 18:28:41.869643  245371 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-808539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-808539/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-808539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:28:42.005525  245371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
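Note: the shell snippet above is idempotent: it only touches /etc/hosts when no line already ends in the hostname, rewriting an existing 127.0.1.1 entry in place or appending a new one. A sketch for verifying the result on the node:

    # hypothetical verification after the snippet has run
    grep '^127.0.1.1' /etc/hosts    # expected: 127.0.1.1 no-preload-808539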
	I1016 18:28:42.005554  245371 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8849/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8849/.minikube}
	I1016 18:28:42.005579  245371 ubuntu.go:190] setting up certificates
	I1016 18:28:42.005590  245371 provision.go:84] configureAuth start
	I1016 18:28:42.005664  245371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-808539
	I1016 18:28:42.024116  245371 provision.go:143] copyHostCerts
	I1016 18:28:42.024183  245371 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem, removing ...
	I1016 18:28:42.024199  245371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem
	I1016 18:28:42.024266  245371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem (1078 bytes)
	I1016 18:28:42.024381  245371 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem, removing ...
	I1016 18:28:42.024390  245371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem
	I1016 18:28:42.024420  245371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem (1123 bytes)
	I1016 18:28:42.024494  245371 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem, removing ...
	I1016 18:28:42.024501  245371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem
	I1016 18:28:42.024524  245371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem (1679 bytes)
	I1016 18:28:42.024587  245371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem org=jenkins.no-preload-808539 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-808539]
	I1016 18:28:42.249353  245371 provision.go:177] copyRemoteCerts
	I1016 18:28:42.249399  245371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:28:42.249437  245371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-808539
	I1016 18:28:42.272212  245371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/no-preload-808539/id_rsa Username:docker}
	I1016 18:28:42.375482  245371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 18:28:42.393915  245371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1016 18:28:42.412472  245371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1016 18:28:42.430920  245371 provision.go:87] duration metric: took 425.315852ms to configureAuth
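Note: configureAuth regenerated server.pem with the SAN list shown in the provision.go:117 line. A sketch for confirming those SANs, assuming openssl is available on the CI host:

    # hypothetical; inspects the cert generated above
    openssl x509 -noout -text -in /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'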
	I1016 18:28:42.430950  245371 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:28:42.431214  245371 config.go:182] Loaded profile config "no-preload-808539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:28:42.431370  245371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-808539
	I1016 18:28:42.455356  245371 main.go:141] libmachine: Using SSH client type: native
	I1016 18:28:42.455739  245371 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1016 18:28:42.455772  245371 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:28:42.777902  245371 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:28:42.777931  245371 machine.go:96] duration metric: took 4.264410044s to provisionDockerMachine
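Note: the sysconfig drop-in written over SSH above can be re-read through minikube's ssh subcommand; a sketch, with the binary path and profile name taken from this run:

    # hypothetical check of the file written above
    out/minikube-linux-amd64 -p no-preload-808539 ssh -- cat /etc/sysconfig/crio.minikube
    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '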
	I1016 18:28:42.777947  245371 start.go:293] postStartSetup for "no-preload-808539" (driver="docker")
	I1016 18:28:42.777960  245371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:28:42.778042  245371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:28:42.778104  245371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-808539
	I1016 18:28:42.799766  245371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/no-preload-808539/id_rsa Username:docker}
	I1016 18:28:42.905583  245371 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:28:42.909529  245371 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:28:42.909559  245371 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:28:42.909572  245371 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/addons for local assets ...
	I1016 18:28:42.909621  245371 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/files for local assets ...
	I1016 18:28:42.909742  245371 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem -> 123752.pem in /etc/ssl/certs
	I1016 18:28:42.909861  245371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:28:42.918379  245371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:28:42.937615  245371 start.go:296] duration metric: took 159.654518ms for postStartSetup
	I1016 18:28:42.937703  245371 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:28:42.937869  245371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-808539
	I1016 18:28:42.959518  245371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/no-preload-808539/id_rsa Username:docker}
	I1016 18:28:41.180470  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1016 18:28:41.180539  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:28:41.180609  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:28:41.209113  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:28:41.209139  228782 cri.go:89] found id: "cc097aa6deaae2796a29ce872ed20155a22253a5bc0ad1b581600d8522e28674"
	I1016 18:28:41.209146  228782 cri.go:89] found id: ""
	I1016 18:28:41.209156  228782 logs.go:282] 2 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f cc097aa6deaae2796a29ce872ed20155a22253a5bc0ad1b581600d8522e28674]
	I1016 18:28:41.209218  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:28:41.213381  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:28:41.217529  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:28:41.217596  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:28:41.244793  228782 cri.go:89] found id: ""
	I1016 18:28:41.244819  228782 logs.go:282] 0 containers: []
	W1016 18:28:41.244830  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:28:41.244837  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:28:41.244894  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:28:41.272923  228782 cri.go:89] found id: ""
	I1016 18:28:41.272946  228782 logs.go:282] 0 containers: []
	W1016 18:28:41.272953  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:28:41.272959  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:28:41.273011  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:28:41.300948  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:28:41.300975  228782 cri.go:89] found id: ""
	I1016 18:28:41.300985  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:28:41.301049  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:28:41.305312  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:28:41.305379  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:28:41.332466  228782 cri.go:89] found id: ""
	I1016 18:28:41.332495  228782 logs.go:282] 0 containers: []
	W1016 18:28:41.332505  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:28:41.332512  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:28:41.332580  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:28:41.360765  228782 cri.go:89] found id: "d4b57a3502815be7d56599ccff60e46c347c28c3c24364506c3b936e45cf1929"
	I1016 18:28:41.360792  228782 cri.go:89] found id: ""
	I1016 18:28:41.360799  228782 logs.go:282] 1 containers: [d4b57a3502815be7d56599ccff60e46c347c28c3c24364506c3b936e45cf1929]
	I1016 18:28:41.360845  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:28:41.364945  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:28:41.365030  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:28:41.391944  228782 cri.go:89] found id: ""
	I1016 18:28:41.391967  228782 logs.go:282] 0 containers: []
	W1016 18:28:41.391977  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:28:41.391983  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:28:41.392058  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:28:41.420057  228782 cri.go:89] found id: ""
	I1016 18:28:41.420083  228782 logs.go:282] 0 containers: []
	W1016 18:28:41.420094  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:28:41.420118  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:28:41.420137  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:28:41.458353  228782 logs.go:123] Gathering logs for kube-apiserver [cc097aa6deaae2796a29ce872ed20155a22253a5bc0ad1b581600d8522e28674] ...
	I1016 18:28:41.458393  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cc097aa6deaae2796a29ce872ed20155a22253a5bc0ad1b581600d8522e28674"
	I1016 18:28:41.494574  228782 logs.go:123] Gathering logs for kube-controller-manager [d4b57a3502815be7d56599ccff60e46c347c28c3c24364506c3b936e45cf1929] ...
	I1016 18:28:41.494603  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d4b57a3502815be7d56599ccff60e46c347c28c3c24364506c3b936e45cf1929"
	I1016 18:28:41.521904  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:28:41.521928  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:28:41.556710  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:28:41.556762  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:28:41.642923  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:28:41.642970  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:28:41.658238  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:28:41.658261  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
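Note: the log-gathering pass interleaved here (process 228782) is a series of plain shell commands against the node, so it can be replayed by hand; the sketch below copies three of the Run: lines above verbatim:

    # manual replay of the gathering commands above
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo journalctl -u kubelet -n 400
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig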
	I1016 18:28:43.058254  245371 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:28:43.064302  245371 fix.go:56] duration metric: took 4.872138002s for fixHost
	I1016 18:28:43.064331  245371 start.go:83] releasing machines lock for "no-preload-808539", held for 4.872189456s
	I1016 18:28:43.064416  245371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-808539
	I1016 18:28:43.085555  245371 ssh_runner.go:195] Run: cat /version.json
	I1016 18:28:43.085612  245371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-808539
	I1016 18:28:43.085660  245371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:28:43.085748  245371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-808539
	I1016 18:28:43.107526  245371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/no-preload-808539/id_rsa Username:docker}
	I1016 18:28:43.109774  245371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/no-preload-808539/id_rsa Username:docker}
	I1016 18:28:43.204618  245371 ssh_runner.go:195] Run: systemctl --version
	I1016 18:28:43.263469  245371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:28:43.304830  245371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:28:43.310491  245371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:28:43.310548  245371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:28:43.319668  245371 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:28:43.319694  245371 start.go:495] detecting cgroup driver to use...
	I1016 18:28:43.319756  245371 detect.go:190] detected "systemd" cgroup driver on host os
	I1016 18:28:43.319808  245371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:28:43.335974  245371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:28:43.351377  245371 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:28:43.351435  245371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:28:43.368838  245371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:28:43.383905  245371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:28:43.480311  245371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:28:43.573860  245371 docker.go:234] disabling docker service ...
	I1016 18:28:43.573932  245371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:28:43.590998  245371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:28:43.604675  245371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:28:43.693575  245371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:28:43.790267  245371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:28:43.806045  245371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:28:43.823867  245371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:28:43.823927  245371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:28:43.834573  245371 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1016 18:28:43.834638  245371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:28:43.845378  245371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:28:43.856426  245371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:28:43.865791  245371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:28:43.874087  245371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:28:43.883678  245371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:28:43.893193  245371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:28:43.902392  245371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:28:43.910281  245371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:28:43.917785  245371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:28:44.012424  245371 ssh_runner.go:195] Run: sudo systemctl restart crio
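Note: after the sed pipeline above, the relevant part of /etc/crio/crio.conf.d/02-crio.conf should read roughly as follows (a sketch assembled from the edits above, not a capture from this run):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]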
	I1016 18:28:44.131613  245371 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:28:44.131679  245371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:28:44.136039  245371 start.go:563] Will wait 60s for crictl version
	I1016 18:28:44.136102  245371 ssh_runner.go:195] Run: which crictl
	I1016 18:28:44.140157  245371 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:28:44.169329  245371 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:28:44.169413  245371 ssh_runner.go:195] Run: crio --version
	I1016 18:28:44.200116  245371 ssh_runner.go:195] Run: crio --version
	I1016 18:28:44.235497  245371 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
	==> CRI-O <==
	Oct 16 18:28:05 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:05.576311307Z" level=info msg="Created container 33d7dc72b038f311f8b70695fd1551fb1ba18755060404c26beb1160688914ea: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v4mf2/kubernetes-dashboard" id=461c6ca6-cd31-4e13-b595-27c05f1e0aa1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:28:05 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:05.576986957Z" level=info msg="Starting container: 33d7dc72b038f311f8b70695fd1551fb1ba18755060404c26beb1160688914ea" id=9c8f497a-e9ef-4aad-bcb5-0ec9220cfd73 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:28:05 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:05.578975063Z" level=info msg="Started container" PID=1718 containerID=33d7dc72b038f311f8b70695fd1551fb1ba18755060404c26beb1160688914ea description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v4mf2/kubernetes-dashboard id=9c8f497a-e9ef-4aad-bcb5-0ec9220cfd73 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9e1183802a11619c9e268a658594b7b8ca7c43979d9927520f9cb362833adce
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.461811564Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=59c84602-cbb2-41b9-9ddd-7ac5f9076218 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.462742987Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c0056e0e-f549-4e31-9b58-d4b115628b63 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.463766049Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ff9443bc-a5d8-4b46-8ad2-768c7ae1eabf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.464048318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.468217169Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.468407616Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ac52c498a56bea214999021f37c1346e95a9c52fde97a5ef5d14ca8b9ee1bb51/merged/etc/passwd: no such file or directory"
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.46843894Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ac52c498a56bea214999021f37c1346e95a9c52fde97a5ef5d14ca8b9ee1bb51/merged/etc/group: no such file or directory"
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.468786762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.491736681Z" level=info msg="Created container d0c68f3c4b25019a937ecef6491d6fa56971a561a4450a4b9dc6ac28bdde0ed1: kube-system/storage-provisioner/storage-provisioner" id=ff9443bc-a5d8-4b46-8ad2-768c7ae1eabf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.49227447Z" level=info msg="Starting container: d0c68f3c4b25019a937ecef6491d6fa56971a561a4450a4b9dc6ac28bdde0ed1" id=85c9d714-1a1a-412e-a87d-a7e13e846a7d name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:28:18 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:18.493875845Z" level=info msg="Started container" PID=1742 containerID=d0c68f3c4b25019a937ecef6491d6fa56971a561a4450a4b9dc6ac28bdde0ed1 description=kube-system/storage-provisioner/storage-provisioner id=85c9d714-1a1a-412e-a87d-a7e13e846a7d name=/runtime.v1.RuntimeService/StartContainer sandboxID=e7dc3aa9646da09fc80eb1eb51e8fa5cf371f80cd1442a682c7181ff3951d897
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.350115707Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=33cc697d-e791-4150-b4a0-60cac56ee339 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.351147799Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=454dc11c-520e-4d62-ae33-1a644f82220a name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.352056764Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9/dashboard-metrics-scraper" id=fbf86681-c11a-4232-b929-2be3c4e645c0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.352353433Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.359139473Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.359571704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.393413515Z" level=info msg="Created container 7648e093dcf554392b6ee6e3cab35361de2ce6729397abb12e1d0b18c2956e63: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9/dashboard-metrics-scraper" id=fbf86681-c11a-4232-b929-2be3c4e645c0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.394156191Z" level=info msg="Starting container: 7648e093dcf554392b6ee6e3cab35361de2ce6729397abb12e1d0b18c2956e63" id=c69656a7-dba6-4239-9132-3f8b6882c0fe name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.396463102Z" level=info msg="Started container" PID=1758 containerID=7648e093dcf554392b6ee6e3cab35361de2ce6729397abb12e1d0b18c2956e63 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9/dashboard-metrics-scraper id=c69656a7-dba6-4239-9132-3f8b6882c0fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=793a7b1ebcbe4cd5c3f13fc5d430ebb76c8e676903ece6e7443d36f40ec33e3b
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.479709467Z" level=info msg="Removing container: 8c83ecca0c41f4acd7cc323cfabab00f1a90d1daa256d9c7dc0e12f19acf8f95" id=6c8df97e-47f8-4547-a357-96168fa8e935 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:28:24 old-k8s-version-956814 crio[563]: time="2025-10-16T18:28:24.489399838Z" level=info msg="Removed container 8c83ecca0c41f4acd7cc323cfabab00f1a90d1daa256d9c7dc0e12f19acf8f95: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9/dashboard-metrics-scraper" id=6c8df97e-47f8-4547-a357-96168fa8e935 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	7648e093dcf55       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   793a7b1ebcbe4       dashboard-metrics-scraper-5f989dc9cf-hfwf9       kubernetes-dashboard
	d0c68f3c4b250       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   e7dc3aa9646da       storage-provisioner                              kube-system
	33d7dc72b038f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago       Running             kubernetes-dashboard        0                   b9e1183802a11       kubernetes-dashboard-8694d4445c-v4mf2            kubernetes-dashboard
	73533f2caeb1c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           57 seconds ago       Running             coredns                     0                   1b281ab740139       coredns-5dd5756b68-kdcm7                         kube-system
	f85605290552a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   1f35f2b705b2a       kindnet-94l8q                                    kube-system
	308aac48e5218       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   0ac990010ee29       busybox                                          default
	1bef49206beb8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   e7dc3aa9646da       storage-provisioner                              kube-system
	995d1735348a9       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           57 seconds ago       Running             kube-proxy                  0                   ab717c8086eb2       kube-proxy-nkwcm                                 kube-system
	e6e794e317e67       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   e032a3f9fb1df       kube-apiserver-old-k8s-version-956814            kube-system
	e255d27c3903c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   e3dba99dc4aa0       etcd-old-k8s-version-956814                      kube-system
	58a737ae76bdf       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   d2d6991ef6a08       kube-controller-manager-old-k8s-version-956814   kube-system
	04c714a2b0c86       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   25238640272ce       kube-scheduler-old-k8s-version-956814            kube-system
	
	
	==> coredns [73533f2caeb1c8ba0bf7613592d2735400b8cced901f45bb29d25fb2ac5be519] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54403 - 9834 "HINFO IN 3006739082644964322.902435482909859467. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.49418236s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
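Note: the final warning means the CoreDNS pod could not reach the apiserver through the kubernetes Service VIP (10.96.0.1:443), consistent with the apiserver being down during this window. A sketch of the usual follow-up checks, assuming the kubeconfig context matches the profile name:

    # hypothetical diagnosis of the i/o timeout above
    kubectl --context old-k8s-version-956814 get svc,endpoints kubernetes
    kubectl --context old-k8s-version-956814 -n kube-system get pods -l k8s-app=kube-dns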
	
	
	==> describe nodes <==
	Name:               old-k8s-version-956814
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-956814
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=old-k8s-version-956814
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_26_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:26:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-956814
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:28:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:28:17 +0000   Thu, 16 Oct 2025 18:26:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:28:17 +0000   Thu, 16 Oct 2025 18:26:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:28:17 +0000   Thu, 16 Oct 2025 18:26:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 18:28:17 +0000   Thu, 16 Oct 2025 18:27:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-956814
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                16c7f49b-fe0a-4b26-a8a7-b5d233753b17
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-5dd5756b68-kdcm7                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 etcd-old-k8s-version-956814                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m6s
	  kube-system                 kindnet-94l8q                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-old-k8s-version-956814             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-old-k8s-version-956814    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-nkwcm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-old-k8s-version-956814             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-hfwf9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-v4mf2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 113s                   kube-proxy       
	  Normal  Starting                 57s                    kube-proxy       
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-956814 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-956814 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-956814 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m7s                   kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m6s                   kubelet          Node old-k8s-version-956814 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s                   kubelet          Node old-k8s-version-956814 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m6s                   kubelet          Node old-k8s-version-956814 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           115s                   node-controller  Node old-k8s-version-956814 event: Registered Node old-k8s-version-956814 in Controller
	  Normal  NodeReady                100s                   kubelet          Node old-k8s-version-956814 status is now: NodeReady
	  Normal  Starting                 61s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node old-k8s-version-956814 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node old-k8s-version-956814 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node old-k8s-version-956814 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                    node-controller  Node old-k8s-version-956814 event: Registered Node old-k8s-version-956814 in Controller
	
	
	==> dmesg <==
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.008703] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.022984] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023933] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +2.047848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +4.031714] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +8.063410] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[ +16.382910] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[Oct16 17:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	
	
	==> etcd [e255d27c3903c0fe570376a329840373a1ad5b5caca41fc82de4b5a229ebafb0] <==
	{"level":"info","ts":"2025-10-16T18:27:44.942529Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-16T18:27:44.942974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-10-16T18:27:44.943062Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-10-16T18:27:44.94317Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f23060b075c4c089","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-10-16T18:27:44.943358Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-16T18:27:44.943423Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-16T18:27:44.944506Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-16T18:27:44.944814Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-16T18:27:44.944886Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-16T18:27:44.945369Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-16T18:27:44.945426Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-16T18:27:45.732832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-16T18:27:45.732897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-16T18:27:45.732946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-16T18:27:45.73297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-10-16T18:27:45.73298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-16T18:27:45.732992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-10-16T18:27:45.733003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-16T18:27:45.734051Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-956814 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-16T18:27:45.735192Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-16T18:27:45.736556Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-16T18:27:45.73658Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-16T18:27:45.735302Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-16T18:27:45.736903Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-16T18:27:45.74099Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 18:28:45 up  1:11,  0 user,  load average: 1.98, 2.35, 1.58
	Linux old-k8s-version-956814 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f85605290552a7127c754cdcd6384894e324c1a39c7ed2c5293fece11354cded] <==
	I1016 18:27:47.957505       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 18:27:47.957807       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1016 18:27:47.957990       1 main.go:148] setting mtu 1500 for CNI 
	I1016 18:27:47.958009       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 18:27:47.958040       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T18:27:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 18:27:48.235080       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 18:27:48.235141       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 18:27:48.235154       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 18:27:48.255319       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 18:27:48.555894       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 18:27:48.555927       1 metrics.go:72] Registering metrics
	I1016 18:27:48.556001       1 controller.go:711] "Syncing nftables rules"
	I1016 18:27:58.235637       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:27:58.235671       1 main.go:301] handling current node
	I1016 18:28:08.235809       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:28:08.235853       1 main.go:301] handling current node
	I1016 18:28:18.235053       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:28:18.235095       1 main.go:301] handling current node
	I1016 18:28:28.236009       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:28:28.236040       1 main.go:301] handling current node
	I1016 18:28:38.242828       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:28:38.242866       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e6e794e317e67fe62de737c5d5d21f76ffd898adc393e7b8d3b5127f203478a3] <==
	I1016 18:27:46.814575       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:27:46.821650       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1016 18:27:46.821662       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 18:27:46.821709       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1016 18:27:46.822780       1 shared_informer.go:318] Caches are synced for configmaps
	I1016 18:27:46.823870       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1016 18:27:46.823885       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1016 18:27:46.823888       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1016 18:27:46.823919       1 aggregator.go:166] initial CRD sync complete...
	I1016 18:27:46.823926       1 autoregister_controller.go:141] Starting autoregister controller
	I1016 18:27:46.823939       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 18:27:46.823947       1 cache.go:39] Caches are synced for autoregister controller
	I1016 18:27:46.844759       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1016 18:27:47.730251       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 18:27:47.818081       1 controller.go:624] quota admission added evaluator for: namespaces
	I1016 18:27:47.853743       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1016 18:27:47.875838       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 18:27:47.883194       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 18:27:47.891688       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1016 18:27:47.928626       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.19.146"}
	I1016 18:27:47.941256       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.93.160"}
	I1016 18:27:59.073402       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1016 18:27:59.180979       1 controller.go:624] quota admission added evaluator for: endpoints
	I1016 18:27:59.271635       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [58a737ae76bdf77210a125a06ade45f191a00aba7f2561852cfb13f05b054511] <==
	I1016 18:27:59.095031       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.707793ms"
	I1016 18:27:59.101467       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.354673ms"
	I1016 18:27:59.101834       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="73.704µs"
	I1016 18:27:59.105123       1 shared_informer.go:318] Caches are synced for PV protection
	I1016 18:27:59.105572       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="16.509592ms"
	I1016 18:27:59.105663       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.525µs"
	I1016 18:27:59.105775       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.248µs"
	I1016 18:27:59.114464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="108.716µs"
	I1016 18:27:59.131990       1 shared_informer.go:318] Caches are synced for endpoint
	I1016 18:27:59.149546       1 shared_informer.go:318] Caches are synced for resource quota
	I1016 18:27:59.182697       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1016 18:27:59.185213       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1016 18:27:59.193342       1 shared_informer.go:318] Caches are synced for resource quota
	I1016 18:27:59.508464       1 shared_informer.go:318] Caches are synced for garbage collector
	I1016 18:27:59.580933       1 shared_informer.go:318] Caches are synced for garbage collector
	I1016 18:27:59.580963       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1016 18:28:02.429583       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.978µs"
	I1016 18:28:03.436666       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.599µs"
	I1016 18:28:04.438750       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.254µs"
	I1016 18:28:06.448766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.911595ms"
	I1016 18:28:06.449016       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="94.665µs"
	I1016 18:28:24.489291       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="64.277µs"
	I1016 18:28:27.396120       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.955815ms"
	I1016 18:28:27.396217       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.817µs"
	I1016 18:28:29.401365       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.825µs"
	
	
	==> kube-proxy [995d1735348a9b1f431941e2e8c8991ad732311551103b42bf5418984a4dddf1] <==
	I1016 18:27:47.824165       1 server_others.go:69] "Using iptables proxy"
	I1016 18:27:47.833923       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1016 18:27:47.856255       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:27:47.860003       1 server_others.go:152] "Using iptables Proxier"
	I1016 18:27:47.860081       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1016 18:27:47.860090       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1016 18:27:47.860123       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1016 18:27:47.860404       1 server.go:846] "Version info" version="v1.28.0"
	I1016 18:27:47.860425       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:27:47.861706       1 config.go:188] "Starting service config controller"
	I1016 18:27:47.861778       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1016 18:27:47.861780       1 config.go:315] "Starting node config controller"
	I1016 18:27:47.861797       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1016 18:27:47.861788       1 config.go:97] "Starting endpoint slice config controller"
	I1016 18:27:47.861827       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1016 18:27:47.962570       1 shared_informer.go:318] Caches are synced for node config
	I1016 18:27:47.962607       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1016 18:27:47.962612       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [04c714a2b0c86cdc256763ea2928fc53c7c7d744cb6468b9458d572797f2c163] <==
	W1016 18:27:46.810267       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1016 18:27:46.810282       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1016 18:27:46.810345       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1016 18:27:46.810364       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1016 18:27:46.810388       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1016 18:27:46.810402       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1016 18:27:46.810451       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1016 18:27:46.810469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1016 18:27:46.810524       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1016 18:27:46.810983       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1016 18:27:46.811055       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1016 18:27:46.811011       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1016 18:27:46.811133       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1016 18:27:46.810579       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1016 18:27:46.811204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1016 18:27:46.810583       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1016 18:27:46.811266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1016 18:27:46.810671       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1016 18:27:46.811383       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1016 18:27:46.810797       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1016 18:27:46.811428       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1016 18:27:46.810530       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1016 18:27:46.811462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1016 18:27:46.811332       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1016 18:27:48.003210       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 16 18:27:59 old-k8s-version-956814 kubelet[716]: I1016 18:27:59.092686     716 topology_manager.go:215] "Topology Admit Handler" podUID="30ae3852-d8ac-427d-8da1-8439a752e2d4" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-v4mf2"
	Oct 16 18:27:59 old-k8s-version-956814 kubelet[716]: I1016 18:27:59.154828     716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/30ae3852-d8ac-427d-8da1-8439a752e2d4-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-v4mf2\" (UID: \"30ae3852-d8ac-427d-8da1-8439a752e2d4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v4mf2"
	Oct 16 18:27:59 old-k8s-version-956814 kubelet[716]: I1016 18:27:59.154889     716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6769c2f0-74d8-4506-988d-c94ce7816b66-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-hfwf9\" (UID: \"6769c2f0-74d8-4506-988d-c94ce7816b66\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9"
	Oct 16 18:27:59 old-k8s-version-956814 kubelet[716]: I1016 18:27:59.154928     716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7488c\" (UniqueName: \"kubernetes.io/projected/6769c2f0-74d8-4506-988d-c94ce7816b66-kube-api-access-7488c\") pod \"dashboard-metrics-scraper-5f989dc9cf-hfwf9\" (UID: \"6769c2f0-74d8-4506-988d-c94ce7816b66\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9"
	Oct 16 18:27:59 old-k8s-version-956814 kubelet[716]: I1016 18:27:59.154980     716 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdzcp\" (UniqueName: \"kubernetes.io/projected/30ae3852-d8ac-427d-8da1-8439a752e2d4-kube-api-access-xdzcp\") pod \"kubernetes-dashboard-8694d4445c-v4mf2\" (UID: \"30ae3852-d8ac-427d-8da1-8439a752e2d4\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v4mf2"
	Oct 16 18:28:02 old-k8s-version-956814 kubelet[716]: I1016 18:28:02.416950     716 scope.go:117] "RemoveContainer" containerID="69080c51b97cd78b8ed0eb4f16d0eeec9aa89158f853e0fe3b2ded11ee015e31"
	Oct 16 18:28:03 old-k8s-version-956814 kubelet[716]: I1016 18:28:03.422374     716 scope.go:117] "RemoveContainer" containerID="69080c51b97cd78b8ed0eb4f16d0eeec9aa89158f853e0fe3b2ded11ee015e31"
	Oct 16 18:28:03 old-k8s-version-956814 kubelet[716]: I1016 18:28:03.422645     716 scope.go:117] "RemoveContainer" containerID="8c83ecca0c41f4acd7cc323cfabab00f1a90d1daa256d9c7dc0e12f19acf8f95"
	Oct 16 18:28:03 old-k8s-version-956814 kubelet[716]: E1016 18:28:03.423017     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hfwf9_kubernetes-dashboard(6769c2f0-74d8-4506-988d-c94ce7816b66)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9" podUID="6769c2f0-74d8-4506-988d-c94ce7816b66"
	Oct 16 18:28:04 old-k8s-version-956814 kubelet[716]: I1016 18:28:04.426863     716 scope.go:117] "RemoveContainer" containerID="8c83ecca0c41f4acd7cc323cfabab00f1a90d1daa256d9c7dc0e12f19acf8f95"
	Oct 16 18:28:04 old-k8s-version-956814 kubelet[716]: E1016 18:28:04.427220     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hfwf9_kubernetes-dashboard(6769c2f0-74d8-4506-988d-c94ce7816b66)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9" podUID="6769c2f0-74d8-4506-988d-c94ce7816b66"
	Oct 16 18:28:06 old-k8s-version-956814 kubelet[716]: I1016 18:28:06.441834     716 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-v4mf2" podStartSLOduration=1.314410847 podCreationTimestamp="2025-10-16 18:27:59 +0000 UTC" firstStartedPulling="2025-10-16 18:27:59.41615197 +0000 UTC m=+15.179465150" lastFinishedPulling="2025-10-16 18:28:05.543514188 +0000 UTC m=+21.306827368" observedRunningTime="2025-10-16 18:28:06.441682488 +0000 UTC m=+22.204995689" watchObservedRunningTime="2025-10-16 18:28:06.441773065 +0000 UTC m=+22.205086264"
	Oct 16 18:28:09 old-k8s-version-956814 kubelet[716]: I1016 18:28:09.391995     716 scope.go:117] "RemoveContainer" containerID="8c83ecca0c41f4acd7cc323cfabab00f1a90d1daa256d9c7dc0e12f19acf8f95"
	Oct 16 18:28:09 old-k8s-version-956814 kubelet[716]: E1016 18:28:09.392327     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hfwf9_kubernetes-dashboard(6769c2f0-74d8-4506-988d-c94ce7816b66)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9" podUID="6769c2f0-74d8-4506-988d-c94ce7816b66"
	Oct 16 18:28:18 old-k8s-version-956814 kubelet[716]: I1016 18:28:18.461416     716 scope.go:117] "RemoveContainer" containerID="1bef49206beb8688b2e461115ae707bb2a04be3521d5212c1567cb3df756f6ff"
	Oct 16 18:28:24 old-k8s-version-956814 kubelet[716]: I1016 18:28:24.349432     716 scope.go:117] "RemoveContainer" containerID="8c83ecca0c41f4acd7cc323cfabab00f1a90d1daa256d9c7dc0e12f19acf8f95"
	Oct 16 18:28:24 old-k8s-version-956814 kubelet[716]: I1016 18:28:24.478537     716 scope.go:117] "RemoveContainer" containerID="8c83ecca0c41f4acd7cc323cfabab00f1a90d1daa256d9c7dc0e12f19acf8f95"
	Oct 16 18:28:24 old-k8s-version-956814 kubelet[716]: I1016 18:28:24.478863     716 scope.go:117] "RemoveContainer" containerID="7648e093dcf554392b6ee6e3cab35361de2ce6729397abb12e1d0b18c2956e63"
	Oct 16 18:28:24 old-k8s-version-956814 kubelet[716]: E1016 18:28:24.479235     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hfwf9_kubernetes-dashboard(6769c2f0-74d8-4506-988d-c94ce7816b66)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9" podUID="6769c2f0-74d8-4506-988d-c94ce7816b66"
	Oct 16 18:28:29 old-k8s-version-956814 kubelet[716]: I1016 18:28:29.391686     716 scope.go:117] "RemoveContainer" containerID="7648e093dcf554392b6ee6e3cab35361de2ce6729397abb12e1d0b18c2956e63"
	Oct 16 18:28:29 old-k8s-version-956814 kubelet[716]: E1016 18:28:29.392117     716 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-hfwf9_kubernetes-dashboard(6769c2f0-74d8-4506-988d-c94ce7816b66)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-hfwf9" podUID="6769c2f0-74d8-4506-988d-c94ce7816b66"
	Oct 16 18:28:41 old-k8s-version-956814 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 18:28:41 old-k8s-version-956814 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 18:28:41 old-k8s-version-956814 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 16 18:28:41 old-k8s-version-956814 systemd[1]: kubelet.service: Consumed 1.561s CPU time.
	
	
	==> kubernetes-dashboard [33d7dc72b038f311f8b70695fd1551fb1ba18755060404c26beb1160688914ea] <==
	2025/10/16 18:28:05 Using namespace: kubernetes-dashboard
	2025/10/16 18:28:05 Using in-cluster config to connect to apiserver
	2025/10/16 18:28:05 Using secret token for csrf signing
	2025/10/16 18:28:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/16 18:28:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/16 18:28:05 Successful initial request to the apiserver, version: v1.28.0
	2025/10/16 18:28:05 Generating JWE encryption key
	2025/10/16 18:28:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/16 18:28:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/16 18:28:05 Initializing JWE encryption key from synchronized object
	2025/10/16 18:28:05 Creating in-cluster Sidecar client
	2025/10/16 18:28:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 18:28:05 Serving insecurely on HTTP port: 9090
	2025/10/16 18:28:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 18:28:05 Starting overwatch
	
	
	==> storage-provisioner [1bef49206beb8688b2e461115ae707bb2a04be3521d5212c1567cb3df756f6ff] <==
	I1016 18:27:47.770432       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1016 18:28:17.773789       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d0c68f3c4b25019a937ecef6491d6fa56971a561a4450a4b9dc6ac28bdde0ed1] <==
	I1016 18:28:18.506100       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 18:28:18.514467       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 18:28:18.514521       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1016 18:28:35.910506       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 18:28:35.910587       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2e1593ca-3024-4a18-b57d-738a19d42c4d", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-956814_df7cc492-0892-4f80-9cc9-cb066ff6fa00 became leader
	I1016 18:28:35.910669       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-956814_df7cc492-0892-4f80-9cc9-cb066ff6fa00!
	I1016 18:28:36.011494       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-956814_df7cc492-0892-4f80-9cc9-cb066ff6fa00!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-956814 -n old-k8s-version-956814
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-956814 -n old-k8s-version-956814: exit status 2 (357.165796ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-956814 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.65s)
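Every failing Pause in this run dies the same way: before freezing containers, minikube shells out to `sudo runc list -f json` to enumerate what is running, and on these crio nodes the command exits 1 with `open /run/runc: no such file or directory` (the no-preload trace below shows the full sequence). A minimal Go sketch of that probe with the state root made explicit; the candidate paths are illustrative assumptions, not values taken from minikube's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Candidate runc state roots; illustrative guesses only. A crio-managed
	// node may keep runtime state outside runc's compiled-in /run/runc default.
	for _, root := range []string{"/run/runc", "/run/crio"} {
		if _, err := os.Stat(root); err != nil {
			fmt.Printf("state root %s: %v\n", root, err)
			continue
		}
		// Same probe as the failing `sudo runc list -f json`, pinned to an
		// explicit --root so a missing directory is distinguishable from an
		// empty container list.
		out, err := exec.Command("runc", "--root", root, "list", "-f", "json").CombinedOutput()
		fmt.Printf("runc --root %s list: err=%v out=%s\n", root, err, out)
	}
}

Pinning --root separates the two conditions that look identical in the traces below: no state directory at all versus a directory that exists but holds no containers.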

x
+
TestStartStop/group/no-preload/serial/Pause (5.93s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-808539 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-808539 --alsologtostderr -v=1: exit status 80 (1.761007155s)

-- stdout --
	* Pausing node no-preload-808539 ... 
	
	

-- /stdout --
** stderr ** 
	I1016 18:29:39.528946  259800 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:29:39.529232  259800 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:29:39.529243  259800 out.go:374] Setting ErrFile to fd 2...
	I1016 18:29:39.529247  259800 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:29:39.529485  259800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:29:39.529796  259800 out.go:368] Setting JSON to false
	I1016 18:29:39.529850  259800 mustload.go:65] Loading cluster: no-preload-808539
	I1016 18:29:39.530230  259800 config.go:182] Loaded profile config "no-preload-808539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:29:39.530608  259800 cli_runner.go:164] Run: docker container inspect no-preload-808539 --format={{.State.Status}}
	I1016 18:29:39.549235  259800 host.go:66] Checking if "no-preload-808539" exists ...
	I1016 18:29:39.549484  259800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:29:39.610305  259800 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-16 18:29:39.598559699 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:29:39.610968  259800 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-808539 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1016 18:29:39.612895  259800 out.go:179] * Pausing node no-preload-808539 ... 
	I1016 18:29:39.614115  259800 host.go:66] Checking if "no-preload-808539" exists ...
	I1016 18:29:39.614368  259800 ssh_runner.go:195] Run: systemctl --version
	I1016 18:29:39.614423  259800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-808539
	I1016 18:29:39.632939  259800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/no-preload-808539/id_rsa Username:docker}
	I1016 18:29:39.731505  259800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:29:39.758196  259800 pause.go:52] kubelet running: true
	I1016 18:29:39.758303  259800 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:29:39.923835  259800 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:29:39.923920  259800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:29:39.989670  259800 cri.go:89] found id: "ebf3196883c18d487165f285301c9acb4041875447091801dea9902d984ed8e9"
	I1016 18:29:39.989694  259800 cri.go:89] found id: "3de7cf0205d7d6eeac5cc2e822d62c8b8946ba8f92cbf91e763dd4318fd7e3c7"
	I1016 18:29:39.989698  259800 cri.go:89] found id: "a093902546acd6ce48370566d454810105657ad4e3a0b5c22c8d50931991d0f2"
	I1016 18:29:39.989701  259800 cri.go:89] found id: "9af550e59feffaa80d88161ffa36ffd9b00a7f1c63f27efce7435d4fb3f0f71a"
	I1016 18:29:39.989703  259800 cri.go:89] found id: "c0468f3a79d7d838f56df1eb32a946b34b2c3ab791c04e2980dbd98bdf6559e9"
	I1016 18:29:39.989706  259800 cri.go:89] found id: "916c3b6d662439d89a451d927be5cafe6a0fca42419d42bd59af6042bb15ceea"
	I1016 18:29:39.989709  259800 cri.go:89] found id: "4f293fe8269d1d295e9d15b52d72bb19e3d1f3c9099a4102dec127e207a05b13"
	I1016 18:29:39.989711  259800 cri.go:89] found id: "7181b04bfb82e037325297ecffa17ead24bea639b33b265693a70609af2e891c"
	I1016 18:29:39.989724  259800 cri.go:89] found id: "36d3ec65570d3105d713c2d5a8f592c5757f5b797e08265d5e50fa232714f4ec"
	I1016 18:29:39.989753  259800 cri.go:89] found id: "08876948c4f7dfb4079f76cc0a99927216b6d250c7e21b297512890297bcaa9d"
	I1016 18:29:39.989758  259800 cri.go:89] found id: "91a77615ada5800866478c73b61ad9458c9aab68602263b4fbb76cbe49d2c275"
	I1016 18:29:39.989761  259800 cri.go:89] found id: ""
	I1016 18:29:39.989807  259800 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:29:40.001913  259800 retry.go:31] will retry after 306.162599ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:29:40Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:29:40.308406  259800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:29:40.322742  259800 pause.go:52] kubelet running: false
	I1016 18:29:40.322807  259800 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:29:40.466508  259800 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:29:40.466601  259800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:29:40.534502  259800 cri.go:89] found id: "ebf3196883c18d487165f285301c9acb4041875447091801dea9902d984ed8e9"
	I1016 18:29:40.534530  259800 cri.go:89] found id: "3de7cf0205d7d6eeac5cc2e822d62c8b8946ba8f92cbf91e763dd4318fd7e3c7"
	I1016 18:29:40.534538  259800 cri.go:89] found id: "a093902546acd6ce48370566d454810105657ad4e3a0b5c22c8d50931991d0f2"
	I1016 18:29:40.534543  259800 cri.go:89] found id: "9af550e59feffaa80d88161ffa36ffd9b00a7f1c63f27efce7435d4fb3f0f71a"
	I1016 18:29:40.534547  259800 cri.go:89] found id: "c0468f3a79d7d838f56df1eb32a946b34b2c3ab791c04e2980dbd98bdf6559e9"
	I1016 18:29:40.534552  259800 cri.go:89] found id: "916c3b6d662439d89a451d927be5cafe6a0fca42419d42bd59af6042bb15ceea"
	I1016 18:29:40.534556  259800 cri.go:89] found id: "4f293fe8269d1d295e9d15b52d72bb19e3d1f3c9099a4102dec127e207a05b13"
	I1016 18:29:40.534560  259800 cri.go:89] found id: "7181b04bfb82e037325297ecffa17ead24bea639b33b265693a70609af2e891c"
	I1016 18:29:40.534564  259800 cri.go:89] found id: "36d3ec65570d3105d713c2d5a8f592c5757f5b797e08265d5e50fa232714f4ec"
	I1016 18:29:40.534578  259800 cri.go:89] found id: "08876948c4f7dfb4079f76cc0a99927216b6d250c7e21b297512890297bcaa9d"
	I1016 18:29:40.534580  259800 cri.go:89] found id: "91a77615ada5800866478c73b61ad9458c9aab68602263b4fbb76cbe49d2c275"
	I1016 18:29:40.534583  259800 cri.go:89] found id: ""
	I1016 18:29:40.534628  259800 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:29:40.546620  259800 retry.go:31] will retry after 412.726206ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:29:40Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:29:40.959912  259800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:29:40.973552  259800 pause.go:52] kubelet running: false
	I1016 18:29:40.973613  259800 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:29:41.144574  259800 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:29:41.144656  259800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:29:41.218018  259800 cri.go:89] found id: "ebf3196883c18d487165f285301c9acb4041875447091801dea9902d984ed8e9"
	I1016 18:29:41.218047  259800 cri.go:89] found id: "3de7cf0205d7d6eeac5cc2e822d62c8b8946ba8f92cbf91e763dd4318fd7e3c7"
	I1016 18:29:41.218054  259800 cri.go:89] found id: "a093902546acd6ce48370566d454810105657ad4e3a0b5c22c8d50931991d0f2"
	I1016 18:29:41.218068  259800 cri.go:89] found id: "9af550e59feffaa80d88161ffa36ffd9b00a7f1c63f27efce7435d4fb3f0f71a"
	I1016 18:29:41.218073  259800 cri.go:89] found id: "c0468f3a79d7d838f56df1eb32a946b34b2c3ab791c04e2980dbd98bdf6559e9"
	I1016 18:29:41.218077  259800 cri.go:89] found id: "916c3b6d662439d89a451d927be5cafe6a0fca42419d42bd59af6042bb15ceea"
	I1016 18:29:41.218080  259800 cri.go:89] found id: "4f293fe8269d1d295e9d15b52d72bb19e3d1f3c9099a4102dec127e207a05b13"
	I1016 18:29:41.218083  259800 cri.go:89] found id: "7181b04bfb82e037325297ecffa17ead24bea639b33b265693a70609af2e891c"
	I1016 18:29:41.218086  259800 cri.go:89] found id: "36d3ec65570d3105d713c2d5a8f592c5757f5b797e08265d5e50fa232714f4ec"
	I1016 18:29:41.218095  259800 cri.go:89] found id: "08876948c4f7dfb4079f76cc0a99927216b6d250c7e21b297512890297bcaa9d"
	I1016 18:29:41.218099  259800 cri.go:89] found id: "91a77615ada5800866478c73b61ad9458c9aab68602263b4fbb76cbe49d2c275"
	I1016 18:29:41.218103  259800 cri.go:89] found id: ""
	I1016 18:29:41.218154  259800 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:29:41.233147  259800 out.go:203] 
	W1016 18:29:41.234657  259800 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:29:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:29:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:29:41.234678  259800 out.go:285] * 
	* 
	W1016 18:29:41.238813  259800 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:29:41.241520  259800 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-808539 --alsologtostderr -v=1 failed: exit status 80
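Note the retry shape in the trace above: pause re-ran the runc probe after 306ms and again after 412ms before surfacing GUEST_PAUSE. A rough sketch of that growing, jittered wait, assuming a simple linear-plus-jitter policy (the actual backoff lives in minikube's retry package and may differ):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryProbe re-runs fn with a growing, jittered wait between failures,
// mirroring the "will retry after ..." lines in the trace above. The attempt
// count and base interval here are illustrative, not minikube's real values.
func retryProbe(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	// Stand-in for the probe that keeps failing in this run.
	probe := func() error { return errors.New("open /run/runc: no such file or directory") }
	fmt.Println("giving up:", retryProbe(3, 200*time.Millisecond, probe))
}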
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-808539
helpers_test.go:243: (dbg) docker inspect no-preload-808539:

-- stdout --
	[
	    {
	        "Id": "ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674",
	        "Created": "2025-10-16T18:27:19.34518913Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 245577,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:28:38.24161274Z",
	            "FinishedAt": "2025-10-16T18:28:37.405919085Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674/hosts",
	        "LogPath": "/var/lib/docker/containers/ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674/ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674-json.log",
	        "Name": "/no-preload-808539",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-808539:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-808539",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674",
	                "LowerDir": "/var/lib/docker/overlay2/868fea85c82dc716ed77eebcc797a288434c0c337e413bace60fdc41e29b2321-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/868fea85c82dc716ed77eebcc797a288434c0c337e413bace60fdc41e29b2321/merged",
	                "UpperDir": "/var/lib/docker/overlay2/868fea85c82dc716ed77eebcc797a288434c0c337e413bace60fdc41e29b2321/diff",
	                "WorkDir": "/var/lib/docker/overlay2/868fea85c82dc716ed77eebcc797a288434c0c337e413bace60fdc41e29b2321/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-808539",
	                "Source": "/var/lib/docker/volumes/no-preload-808539/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-808539",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-808539",
	                "name.minikube.sigs.k8s.io": "no-preload-808539",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "01cfbd7e3b6a4580dfa96c52130c4aa91cb0a438413e236ed53b2f26370660e1",
	            "SandboxKey": "/var/run/docker/netns/01cfbd7e3b6a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-808539": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:90:06:f8:1f:25",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "38dc5e7162482fea5b37cb1ee9d81ad023804ad94f7487798d7ddee0954e300e",
	                    "EndpointID": "11c5e5bc704a28b128dd8cb214ab5a4c51aedf7f59c06213c99194eadbf8d464",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-808539",
	                        "ee665d228e59"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
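The inspect dump above is easier to query field by field than to read whole. A minimal sketch using Go templates (assuming a Docker CLI on the host that ran the tests; the profile name no-preload-808539 is taken from the dump):

	# network mode and container IP on the profile network
	docker inspect --format '{{.HostConfig.NetworkMode}}' no-preload-808539
	docker inspect --format '{{(index .NetworkSettings.Networks "no-preload-808539").IPAddress}}' no-preload-808539
	# the ephemeral host port mapped to the apiserver (8443/tcp -> 33071 in this run)
	docker inspect --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-808539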
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-808539 -n no-preload-808539
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-808539 -n no-preload-808539: exit status 2 (346.749781ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
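helpers_test treats exit status 2 as possibly benign because `minikube status` encodes component health in its exit code (its help text describes bits for host, cluster, and kubernetes set right to left), so a Running host with a paused cluster still exits non-zero. A quick spot check, as a sketch:

	out/minikube-linux-amd64 status -p no-preload-808539 --format '{{.Host}}/{{.APIServer}}/{{.Kubelet}}'
	echo "exit=$?"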
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-808539 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-808539 logs -n 25: (1.302314239s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p missing-upgrade-294813 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-294813       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:27 UTC │
	│ stop    │ -p kubernetes-upgrade-750025                                                                                                                                                                                                                  │ kubernetes-upgrade-750025    │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-750025    │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │                     │
	│ delete  │ -p missing-upgrade-294813                                                                                                                                                                                                                     │ missing-upgrade-294813       │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:27 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-956814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │                     │
	│ start   │ -p no-preload-808539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:28 UTC │
	│ stop    │ -p old-k8s-version-956814 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:27 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-956814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:27 UTC │
	│ start   │ -p old-k8s-version-956814 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:28 UTC │
	│ addons  │ enable metrics-server -p no-preload-808539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │                     │
	│ stop    │ -p no-preload-808539 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ addons  │ enable dashboard -p no-preload-808539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ start   │ -p no-preload-808539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:29 UTC │
	│ image   │ old-k8s-version-956814 image list --format=json                                                                                                                                                                                               │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ pause   │ -p old-k8s-version-956814 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │                     │
	│ delete  │ -p old-k8s-version-956814                                                                                                                                                                                                                     │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ delete  │ -p old-k8s-version-956814                                                                                                                                                                                                                     │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ start   │ -p embed-certs-063117 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p cert-expiration-489554 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-489554       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p cert-expiration-489554                                                                                                                                                                                                                     │ cert-expiration-489554       │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p disable-driver-mounts-246527                                                                                                                                                                                                               │ disable-driver-mounts-246527 │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p default-k8s-diff-port-523257 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ image   │ no-preload-808539 image list --format=json                                                                                                                                                                                                    │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ pause   │ -p no-preload-808539 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-063117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:29:07
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:29:07.040256  254209 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:29:07.040551  254209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:29:07.040562  254209 out.go:374] Setting ErrFile to fd 2...
	I1016 18:29:07.040565  254209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:29:07.040803  254209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:29:07.041325  254209 out.go:368] Setting JSON to false
	I1016 18:29:07.042806  254209 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4295,"bootTime":1760635052,"procs":320,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:29:07.042932  254209 start.go:141] virtualization: kvm guest
	I1016 18:29:07.045364  254209 out.go:179] * [default-k8s-diff-port-523257] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:29:07.046957  254209 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:29:07.046958  254209 notify.go:220] Checking for updates...
	I1016 18:29:07.050966  254209 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:29:07.052908  254209 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:29:07.054502  254209 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 18:29:07.055956  254209 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:29:07.057344  254209 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:29:07.059464  254209 config.go:182] Loaded profile config "embed-certs-063117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:29:07.059605  254209 config.go:182] Loaded profile config "kubernetes-upgrade-750025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:29:07.059765  254209 config.go:182] Loaded profile config "no-preload-808539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:29:07.059863  254209 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:29:07.085980  254209 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 18:29:07.086152  254209 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:29:07.152740  254209 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-16 18:29:07.141947952 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:29:07.152862  254209 docker.go:318] overlay module found
	I1016 18:29:07.154961  254209 out.go:179] * Using the docker driver based on user configuration
	I1016 18:29:07.156386  254209 start.go:305] selected driver: docker
	I1016 18:29:07.156405  254209 start.go:925] validating driver "docker" against <nil>
	I1016 18:29:07.156417  254209 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:29:07.157063  254209 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:29:07.222394  254209 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-16 18:29:07.211344644 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:29:07.222535  254209 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 18:29:07.222748  254209 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:29:07.224789  254209 out.go:179] * Using Docker driver with root privileges
	I1016 18:29:07.226432  254209 cni.go:84] Creating CNI manager for ""
	I1016 18:29:07.226503  254209 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:29:07.226522  254209 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1016 18:29:07.226597  254209 start.go:349] cluster config:
	{Name:default-k8s-diff-port-523257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:29:07.228189  254209 out.go:179] * Starting "default-k8s-diff-port-523257" primary control-plane node in "default-k8s-diff-port-523257" cluster
	I1016 18:29:07.229711  254209 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:29:07.231414  254209 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:29:07.232838  254209 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:29:07.232890  254209 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1016 18:29:07.232901  254209 cache.go:58] Caching tarball of preloaded images
	I1016 18:29:07.232950  254209 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:29:07.233007  254209 preload.go:233] Found /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1016 18:29:07.233023  254209 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:29:07.233110  254209 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/config.json ...
	I1016 18:29:07.233129  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/config.json: {Name:mkc8f0a47ba498cd8655372776f58860c7a1a49d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
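	The saved profile is plain JSON mirroring the cluster config logged above, so it can be inspected directly; a sketch assuming jq is installed (field names follow the config dump):
	jq '{Name, KubernetesVersion: .KubernetesConfig.KubernetesVersion, APIServerPort: .Nodes[0].Port}' \
	  /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/config.json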
	I1016 18:29:07.255362  254209 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:29:07.255388  254209 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:29:07.255409  254209 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:29:07.255451  254209 start.go:360] acquireMachinesLock for default-k8s-diff-port-523257: {Name:mk0ef672dc84306ea126d15d9b249684df6a69ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:29:07.255579  254209 start.go:364] duration metric: took 109.249µs to acquireMachinesLock for "default-k8s-diff-port-523257"
	I1016 18:29:07.255609  254209 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-523257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:29:07.255702  254209 start.go:125] createHost starting for "" (driver="docker")
	W1016 18:29:05.418755  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	W1016 18:29:07.419105  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	I1016 18:29:04.081460  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:04.081500  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:06.598777  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:06.599234  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:06.599283  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:06.599337  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:06.632534  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:06.632559  228782 cri.go:89] found id: ""
	I1016 18:29:06.632566  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:06.632623  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:06.636735  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:06.636800  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:06.670881  228782 cri.go:89] found id: ""
	I1016 18:29:06.670915  228782 logs.go:282] 0 containers: []
	W1016 18:29:06.670928  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:06.670937  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:06.670990  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:06.701324  228782 cri.go:89] found id: ""
	I1016 18:29:06.701352  228782 logs.go:282] 0 containers: []
	W1016 18:29:06.701362  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:06.701370  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:06.701431  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:06.735895  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:06.735922  228782 cri.go:89] found id: ""
	I1016 18:29:06.735930  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:06.735980  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:06.741105  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:06.741178  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:06.774597  228782 cri.go:89] found id: ""
	I1016 18:29:06.774618  228782 logs.go:282] 0 containers: []
	W1016 18:29:06.774625  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:06.774632  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:06.774674  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:06.806134  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:06.806153  228782 cri.go:89] found id: ""
	I1016 18:29:06.806163  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:06.806215  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:06.811555  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:06.811627  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:06.846430  228782 cri.go:89] found id: ""
	I1016 18:29:06.846456  228782 logs.go:282] 0 containers: []
	W1016 18:29:06.846465  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:06.846472  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:06.846528  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:06.878395  228782 cri.go:89] found id: ""
	I1016 18:29:06.878419  228782 logs.go:282] 0 containers: []
	W1016 18:29:06.878430  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:06.878440  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:06.878454  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:06.938432  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:06.938467  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:06.970056  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:06.970085  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:07.027971  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:07.028000  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:07.064564  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:07.064596  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:07.164562  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:07.164594  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:07.185438  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:07.185470  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:07.260040  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
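	The describe-nodes failure is just the retry loop observing an apiserver that is still down, matching the refused healthz probes above. The same checks can be rerun by hand (a sketch; the endpoint 192.168.76.2:8443 comes from the log):
	# probe apiserver health directly
	curl -sk https://192.168.76.2:8443/healthz; echo
	# list apiserver containers the way the log gatherer does
	sudo crictl ps -a --quiet --name=kube-apiserver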
	I1016 18:29:07.260063  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:07.260077  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:07.258815  254209 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1016 18:29:07.259101  254209 start.go:159] libmachine.API.Create for "default-k8s-diff-port-523257" (driver="docker")
	I1016 18:29:07.259145  254209 client.go:168] LocalClient.Create starting
	I1016 18:29:07.259324  254209 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem
	I1016 18:29:07.259400  254209 main.go:141] libmachine: Decoding PEM data...
	I1016 18:29:07.259427  254209 main.go:141] libmachine: Parsing certificate...
	I1016 18:29:07.259512  254209 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem
	I1016 18:29:07.259555  254209 main.go:141] libmachine: Decoding PEM data...
	I1016 18:29:07.259573  254209 main.go:141] libmachine: Parsing certificate...
	I1016 18:29:07.260104  254209 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-523257 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1016 18:29:07.281148  254209 cli_runner.go:211] docker network inspect default-k8s-diff-port-523257 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1016 18:29:07.281225  254209 network_create.go:284] running [docker network inspect default-k8s-diff-port-523257] to gather additional debugging logs...
	I1016 18:29:07.281243  254209 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-523257
	W1016 18:29:07.301649  254209 cli_runner.go:211] docker network inspect default-k8s-diff-port-523257 returned with exit code 1
	I1016 18:29:07.301683  254209 network_create.go:287] error running [docker network inspect default-k8s-diff-port-523257]: docker network inspect default-k8s-diff-port-523257: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-523257 not found
	I1016 18:29:07.301701  254209 network_create.go:289] output of [docker network inspect default-k8s-diff-port-523257]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-523257 not found
	
	** /stderr **
	I1016 18:29:07.301822  254209 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:29:07.322829  254209 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e6b487beca69 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:46:43:25:0f:93} reservation:<nil>}
	I1016 18:29:07.323663  254209 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9d79ecee39e1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:a0:12:f5:af:3a} reservation:<nil>}
	I1016 18:29:07.324428  254209 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-23b5ade12eda IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:13:e4:8d:c1:04} reservation:<nil>}
	I1016 18:29:07.324921  254209 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a07ac2eb0982 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:42:2a:d5:21:5c:9c} reservation:<nil>}
	I1016 18:29:07.325701  254209 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea8b80}
	I1016 18:29:07.325766  254209 network_create.go:124] attempt to create docker network default-k8s-diff-port-523257 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1016 18:29:07.325819  254209 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-523257 default-k8s-diff-port-523257
	I1016 18:29:07.389443  254209 network_create.go:108] docker network default-k8s-diff-port-523257 192.168.85.0/24 created
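	The skipping-subnet scan above walks the subnets already claimed by existing bridge networks until a free /24 turns up. The same inventory can be pulled with the Docker CLI alone; a sketch:
	# print each bridge network with its subnet
	for n in $(docker network ls --filter driver=bridge -q); do
	  docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}' "$n"
	done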
	I1016 18:29:07.389474  254209 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-523257" container
	I1016 18:29:07.389534  254209 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1016 18:29:07.408685  254209 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-523257 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-523257 --label created_by.minikube.sigs.k8s.io=true
	I1016 18:29:07.429641  254209 oci.go:103] Successfully created a docker volume default-k8s-diff-port-523257
	I1016 18:29:07.429766  254209 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-523257-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-523257 --entrypoint /usr/bin/test -v default-k8s-diff-port-523257:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1016 18:29:07.867408  254209 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-523257
	I1016 18:29:07.867462  254209 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:29:07.867483  254209 kic.go:194] Starting extracting preloaded images to volume ...
	I1016 18:29:07.867554  254209 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-523257:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1016 18:29:11.718052  254209 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-523257:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (3.850427538s)
	I1016 18:29:11.718089  254209 kic.go:203] duration metric: took 3.850601984s to extract preloaded images to volume ...
	W1016 18:29:11.718202  254209 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1016 18:29:11.718242  254209 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1016 18:29:11.718287  254209 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1016 18:29:11.783561  254209 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-523257 --name default-k8s-diff-port-523257 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-523257 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-523257 --network default-k8s-diff-port-523257 --ip 192.168.85.2 --volume default-k8s-diff-port-523257:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
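	Each --publish=127.0.0.1:: flag above asks Docker for an ephemeral host port, which is why the earlier inspect output shows ports like 33068-33072. Once the container is up, the chosen ports can be read back; a sketch:
	# all published mappings, or a single one
	docker port default-k8s-diff-port-523257
	docker port default-k8s-diff-port-523257 8444/tcp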
	W1016 18:29:09.920187  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	W1016 18:29:11.920840  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	I1016 18:29:09.798326  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:09.798815  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:09.798876  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:09.798935  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:09.834829  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:09.834862  228782 cri.go:89] found id: ""
	I1016 18:29:09.834871  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:09.834929  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:09.840366  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:09.840444  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:09.872774  228782 cri.go:89] found id: ""
	I1016 18:29:09.872802  228782 logs.go:282] 0 containers: []
	W1016 18:29:09.872812  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:09.872819  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:09.872878  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:09.909210  228782 cri.go:89] found id: ""
	I1016 18:29:09.909236  228782 logs.go:282] 0 containers: []
	W1016 18:29:09.909247  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:09.909255  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:09.909312  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:09.945086  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:09.945108  228782 cri.go:89] found id: ""
	I1016 18:29:09.945117  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:09.945174  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:09.950041  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:09.950103  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:09.987902  228782 cri.go:89] found id: ""
	I1016 18:29:09.987927  228782 logs.go:282] 0 containers: []
	W1016 18:29:09.987938  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:09.987949  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:09.988003  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:10.021037  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:10.021074  228782 cri.go:89] found id: ""
	I1016 18:29:10.021082  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:10.021134  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:10.026004  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:10.026077  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:10.055087  228782 cri.go:89] found id: ""
	I1016 18:29:10.055111  228782 logs.go:282] 0 containers: []
	W1016 18:29:10.055121  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:10.055135  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:10.055193  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:10.085674  228782 cri.go:89] found id: ""
	I1016 18:29:10.085703  228782 logs.go:282] 0 containers: []
	W1016 18:29:10.085737  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:10.085750  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:10.085763  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:10.164177  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:10.164213  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:10.199764  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:10.199797  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:10.318961  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:10.318998  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:10.347541  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:10.347582  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:10.426635  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:10.426658  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:10.426673  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:10.460893  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:10.460927  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:10.514361  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:10.514395  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:13.045784  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:13.046220  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:13.046274  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:13.046330  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:13.079185  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:13.079212  228782 cri.go:89] found id: ""
	I1016 18:29:13.079222  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:13.079289  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:13.083978  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:13.084050  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:13.114350  228782 cri.go:89] found id: ""
	I1016 18:29:13.114374  228782 logs.go:282] 0 containers: []
	W1016 18:29:13.114385  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:13.114392  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:13.114444  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:13.141976  228782 cri.go:89] found id: ""
	I1016 18:29:13.142002  228782 logs.go:282] 0 containers: []
	W1016 18:29:13.142010  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:13.142016  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:13.142086  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:13.174818  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:13.174848  228782 cri.go:89] found id: ""
	I1016 18:29:13.174858  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:13.174909  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:13.179004  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:13.179070  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:13.214403  228782 cri.go:89] found id: ""
	I1016 18:29:13.214431  228782 logs.go:282] 0 containers: []
	W1016 18:29:13.214442  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:13.214449  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:13.214507  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:13.246810  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:13.246834  228782 cri.go:89] found id: ""
	I1016 18:29:13.246844  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:13.246902  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:13.251623  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:13.251685  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:13.283291  228782 cri.go:89] found id: ""
	I1016 18:29:13.283318  228782 logs.go:282] 0 containers: []
	W1016 18:29:13.283329  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:13.283339  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:13.283388  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:13.311343  228782 cri.go:89] found id: ""
	I1016 18:29:13.311368  228782 logs.go:282] 0 containers: []
	W1016 18:29:13.311376  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:13.311383  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:13.311396  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:13.368339  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:13.368377  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:13.398197  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:13.398227  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:13.511753  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:13.511788  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:13.529854  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:13.529890  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:13.602327  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:13.602347  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:13.602359  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:13.636600  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:13.636635  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:13.688431  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:13.688469  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
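The two near-identical passes above are minikube's health-retry loop: while the apiserver refuses connections, it enumerates each expected control-plane component with crictl, then tails the logs of whatever containers it found, plus the crio and kubelet journals. A minimal standalone sketch of that pattern, assuming crictl is on PATH and pointed at the default CRI-O socket (not minikube's actual code):

	#!/usr/bin/env bash
	# Hedged sketch of the log-gathering loop shown above.
	set -euo pipefail
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet storage-provisioner; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")   # all states, IDs only
	  if [ -z "$ids" ]; then
	    echo "no container matching \"$name\"" >&2
	    continue
	  fi
	  for id in $ids; do
	    echo "=== $name ($id) ==="
	    sudo crictl logs --tail 400 "$id"               # last 400 lines, as above
	  done
	done
	sudo journalctl -u crio -n 400                      # runtime journal
	sudo journalctl -u kubelet -n 400                   # kubelet journal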
	I1016 18:29:14.812495  249491 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1016 18:29:14.812565  249491 kubeadm.go:318] [preflight] Running pre-flight checks
	I1016 18:29:14.812651  249491 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1016 18:29:14.812697  249491 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1016 18:29:14.812750  249491 kubeadm.go:318] OS: Linux
	I1016 18:29:14.812798  249491 kubeadm.go:318] CGROUPS_CPU: enabled
	I1016 18:29:14.812846  249491 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1016 18:29:14.812885  249491 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1016 18:29:14.812952  249491 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1016 18:29:14.812998  249491 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1016 18:29:14.813044  249491 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1016 18:29:14.813153  249491 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1016 18:29:14.813231  249491 kubeadm.go:318] CGROUPS_IO: enabled
	I1016 18:29:14.813325  249491 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1016 18:29:14.813441  249491 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1016 18:29:14.813562  249491 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1016 18:29:14.813642  249491 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1016 18:29:14.815445  249491 out.go:252]   - Generating certificates and keys ...
	I1016 18:29:14.815539  249491 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 18:29:14.815602  249491 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 18:29:14.815663  249491 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 18:29:14.815743  249491 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 18:29:14.815797  249491 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 18:29:14.815883  249491 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 18:29:14.815954  249491 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 18:29:14.816076  249491 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-063117 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1016 18:29:14.816123  249491 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 18:29:14.816240  249491 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-063117 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1016 18:29:14.816345  249491 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 18:29:14.816434  249491 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 18:29:14.816488  249491 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 18:29:14.816537  249491 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 18:29:14.816611  249491 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 18:29:14.816701  249491 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 18:29:14.816787  249491 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 18:29:14.816885  249491 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 18:29:14.816956  249491 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 18:29:14.817026  249491 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 18:29:14.817091  249491 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 18:29:14.818496  249491 out.go:252]   - Booting up control plane ...
	I1016 18:29:14.818580  249491 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 18:29:14.818643  249491 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 18:29:14.818755  249491 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 18:29:14.818887  249491 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 18:29:14.819010  249491 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 18:29:14.819110  249491 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 18:29:14.819187  249491 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 18:29:14.819224  249491 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 18:29:14.819345  249491 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 18:29:14.819458  249491 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 18:29:14.819519  249491 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500924512s
	I1016 18:29:14.819610  249491 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 18:29:14.819682  249491 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1016 18:29:14.819785  249491 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 18:29:14.819861  249491 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 18:29:14.819937  249491 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.311071654s
	I1016 18:29:14.819995  249491 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.104436473s
	I1016 18:29:14.820062  249491 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.00209408s
	I1016 18:29:14.820157  249491 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 18:29:14.820281  249491 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 18:29:14.820375  249491 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 18:29:14.820585  249491 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-063117 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 18:29:14.820666  249491 kubeadm.go:318] [bootstrap-token] Using token: 5rsifa.smk486u4t69rbatb
	I1016 18:29:14.822434  249491 out.go:252]   - Configuring RBAC rules ...
	I1016 18:29:14.822560  249491 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 18:29:14.822656  249491 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 18:29:14.822845  249491 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 18:29:14.823060  249491 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 18:29:14.823170  249491 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 18:29:14.823249  249491 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 18:29:14.823359  249491 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 18:29:14.823399  249491 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 18:29:14.823440  249491 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 18:29:14.823446  249491 kubeadm.go:318] 
	I1016 18:29:14.823500  249491 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 18:29:14.823519  249491 kubeadm.go:318] 
	I1016 18:29:14.823599  249491 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 18:29:14.823606  249491 kubeadm.go:318] 
	I1016 18:29:14.823628  249491 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 18:29:14.823679  249491 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 18:29:14.823767  249491 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 18:29:14.823775  249491 kubeadm.go:318] 
	I1016 18:29:14.823844  249491 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 18:29:14.823859  249491 kubeadm.go:318] 
	I1016 18:29:14.823926  249491 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 18:29:14.823936  249491 kubeadm.go:318] 
	I1016 18:29:14.824017  249491 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 18:29:14.824127  249491 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 18:29:14.824285  249491 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 18:29:14.824304  249491 kubeadm.go:318] 
	I1016 18:29:14.824446  249491 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 18:29:14.824583  249491 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 18:29:14.824596  249491 kubeadm.go:318] 
	I1016 18:29:14.824739  249491 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 5rsifa.smk486u4t69rbatb \
	I1016 18:29:14.824843  249491 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c \
	I1016 18:29:14.824866  249491 kubeadm.go:318] 	--control-plane 
	I1016 18:29:14.824870  249491 kubeadm.go:318] 
	I1016 18:29:14.824963  249491 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 18:29:14.824974  249491 kubeadm.go:318] 
	I1016 18:29:14.825046  249491 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 5rsifa.smk486u4t69rbatb \
	I1016 18:29:14.825152  249491 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c 
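The join command pins the cluster CA through --discovery-token-ca-cert-hash. If that hash is misplaced, it can be recomputed on the control plane from the CA certificate; this is the standard kubeadm recipe (the path below is kubeadm's default CA location, which may differ from minikube's /var/lib/minikube/certs layout):

	# Recompute the sha256:<hash> value expected by --discovery-token-ca-cert-hash.
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'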
	I1016 18:29:14.825162  249491 cni.go:84] Creating CNI manager for ""
	I1016 18:29:14.825169  249491 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:29:14.826898  249491 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 18:29:12.063356  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Running}}
	I1016 18:29:12.082378  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:29:12.101794  254209 cli_runner.go:164] Run: docker exec default-k8s-diff-port-523257 stat /var/lib/dpkg/alternatives/iptables
	I1016 18:29:12.150828  254209 oci.go:144] the created container "default-k8s-diff-port-523257" has a running status.
	I1016 18:29:12.150862  254209 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa...
	I1016 18:29:12.360966  254209 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1016 18:29:12.395477  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:29:12.421296  254209 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1016 18:29:12.421318  254209 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-523257 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1016 18:29:12.475647  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:29:12.500605  254209 machine.go:93] provisionDockerMachine start ...
	I1016 18:29:12.500741  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:12.520832  254209 main.go:141] libmachine: Using SSH client type: native
	I1016 18:29:12.521147  254209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1016 18:29:12.521169  254209 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:29:12.668259  254209 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-523257
	
	I1016 18:29:12.668290  254209 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-523257"
	I1016 18:29:12.668359  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:12.690556  254209 main.go:141] libmachine: Using SSH client type: native
	I1016 18:29:12.690997  254209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1016 18:29:12.691041  254209 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-523257 && echo "default-k8s-diff-port-523257" | sudo tee /etc/hostname
	I1016 18:29:12.853318  254209 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-523257
	
	I1016 18:29:12.853397  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:12.874368  254209 main.go:141] libmachine: Using SSH client type: native
	I1016 18:29:12.875979  254209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1016 18:29:12.876032  254209 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-523257' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-523257/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-523257' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:29:13.023166  254209 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:29:13.023197  254209 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8849/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8849/.minikube}
	I1016 18:29:13.023247  254209 ubuntu.go:190] setting up certificates
	I1016 18:29:13.023261  254209 provision.go:84] configureAuth start
	I1016 18:29:13.023324  254209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523257
	I1016 18:29:13.044297  254209 provision.go:143] copyHostCerts
	I1016 18:29:13.044377  254209 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem, removing ...
	I1016 18:29:13.044387  254209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem
	I1016 18:29:13.044480  254209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem (1078 bytes)
	I1016 18:29:13.044612  254209 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem, removing ...
	I1016 18:29:13.044620  254209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem
	I1016 18:29:13.044665  254209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem (1123 bytes)
	I1016 18:29:13.044833  254209 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem, removing ...
	I1016 18:29:13.044854  254209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem
	I1016 18:29:13.044899  254209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem (1679 bytes)
	I1016 18:29:13.044986  254209 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-523257 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-523257 localhost minikube]
	I1016 18:29:13.322042  254209 provision.go:177] copyRemoteCerts
	I1016 18:29:13.322098  254209 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:29:13.322130  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:13.341345  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:13.443517  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:29:13.466314  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 18:29:13.488307  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1016 18:29:13.510900  254209 provision.go:87] duration metric: took 487.621457ms to configureAuth
	I1016 18:29:13.510932  254209 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:29:13.511156  254209 config.go:182] Loaded profile config "default-k8s-diff-port-523257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:29:13.511275  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:13.533416  254209 main.go:141] libmachine: Using SSH client type: native
	I1016 18:29:13.533709  254209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1016 18:29:13.533754  254209 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:29:13.799038  254209 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:29:13.799068  254209 machine.go:96] duration metric: took 1.2984414s to provisionDockerMachine
	I1016 18:29:13.799083  254209 client.go:171] duration metric: took 6.539927602s to LocalClient.Create
	I1016 18:29:13.799111  254209 start.go:167] duration metric: took 6.540012376s to libmachine.API.Create "default-k8s-diff-port-523257"
	I1016 18:29:13.799126  254209 start.go:293] postStartSetup for "default-k8s-diff-port-523257" (driver="docker")
	I1016 18:29:13.799140  254209 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:29:13.799211  254209 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:29:13.799291  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:13.819622  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:13.924749  254209 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:29:13.928900  254209 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:29:13.928949  254209 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:29:13.928962  254209 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/addons for local assets ...
	I1016 18:29:13.929014  254209 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/files for local assets ...
	I1016 18:29:13.929153  254209 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem -> 123752.pem in /etc/ssl/certs
	I1016 18:29:13.929270  254209 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:29:13.938068  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:29:13.959350  254209 start.go:296] duration metric: took 160.208327ms for postStartSetup
	I1016 18:29:13.959772  254209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523257
	I1016 18:29:13.981564  254209 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/config.json ...
	I1016 18:29:13.981929  254209 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:29:13.981986  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:14.002862  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:14.105028  254209 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:29:14.109906  254209 start.go:128] duration metric: took 6.854190815s to createHost
	I1016 18:29:14.109928  254209 start.go:83] releasing machines lock for "default-k8s-diff-port-523257", held for 6.854337757s
	I1016 18:29:14.109985  254209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523257
	I1016 18:29:14.129342  254209 ssh_runner.go:195] Run: cat /version.json
	I1016 18:29:14.129364  254209 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:29:14.129388  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:14.129427  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:14.148145  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:14.148510  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:14.301264  254209 ssh_runner.go:195] Run: systemctl --version
	I1016 18:29:14.308012  254209 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:29:14.343595  254209 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:29:14.348610  254209 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:29:14.348680  254209 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:29:14.374585  254209 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1016 18:29:14.374606  254209 start.go:495] detecting cgroup driver to use...
	I1016 18:29:14.374641  254209 detect.go:190] detected "systemd" cgroup driver on host os
	I1016 18:29:14.374699  254209 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:29:14.390967  254209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:29:14.404114  254209 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:29:14.404173  254209 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:29:14.423858  254209 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:29:14.443353  254209 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:29:14.528065  254209 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:29:14.616017  254209 docker.go:234] disabling docker service ...
	I1016 18:29:14.616093  254209 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:29:14.636286  254209 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:29:14.649917  254209 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:29:14.738496  254209 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:29:14.830481  254209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:29:14.844213  254209 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:29:14.860041  254209 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:29:14.860111  254209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.871530  254209 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1016 18:29:14.871599  254209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.882155  254209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.891583  254209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.901751  254209 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:29:14.911126  254209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.923235  254209 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.940508  254209 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.951261  254209 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:29:14.961600  254209 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:29:14.969949  254209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:29:15.065750  254209 ssh_runner.go:195] Run: sudo systemctl restart crio
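Read together, the sed edits above amount to a small CRI-O drop-in: pin the pause image, switch the cgroup manager to systemd with conmon in the pod cgroup, and open low ports to unprivileged pods. A sketch of the fragment those commands imply, written as a heredoc to a hypothetical .example path (section headers assumed from CRI-O's documented config layout; the real 02-crio.conf contains more settings):

	# Illustrative only: the drop-in fragment implied by the sed edits above.
	sudo tee /etc/crio/crio.conf.d/02-crio.conf.example >/dev/null <<'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF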
	I1016 18:29:15.196909  254209 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:29:15.197013  254209 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:29:15.201701  254209 start.go:563] Will wait 60s for crictl version
	I1016 18:29:15.201777  254209 ssh_runner.go:195] Run: which crictl
	I1016 18:29:15.205695  254209 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:29:15.235561  254209 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:29:15.235649  254209 ssh_runner.go:195] Run: crio --version
	I1016 18:29:15.265880  254209 ssh_runner.go:195] Run: crio --version
	I1016 18:29:15.296467  254209 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:29:15.297746  254209 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-523257 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:29:15.315570  254209 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1016 18:29:15.319846  254209 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:29:15.330320  254209 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-523257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:29:15.330442  254209 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:29:15.330496  254209 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:29:15.362598  254209 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:29:15.362621  254209 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:29:15.362681  254209 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:29:15.388591  254209 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:29:15.388610  254209 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:29:15.388617  254209 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1016 18:29:15.388687  254209 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-523257 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
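The empty ExecStart= followed by a second ExecStart= in the unit above is the standard systemd override idiom: the blank assignment clears the inherited command list before the replacement is appended. Minikube ships this as the 10-kubeadm.conf drop-in scp'd a few lines below; a hand-applied equivalent (flags abbreviated from the unit above) might look like:

	# systemd drop-in override: blank ExecStart= resets the list, the next
	# ExecStart= supplies the replacement command.
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet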
	I1016 18:29:15.388767  254209 ssh_runner.go:195] Run: crio config
	I1016 18:29:15.438126  254209 cni.go:84] Creating CNI manager for ""
	I1016 18:29:15.438153  254209 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:29:15.438169  254209 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:29:15.438189  254209 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-523257 NodeName:default-k8s-diff-port-523257 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:29:15.438304  254209 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-523257"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
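	This rendered config is what gets written to /var/tmp/minikube/kubeadm.yaml.new (2224 bytes, scp'd a few lines below). Recent kubeadm releases can sanity-check such a file before init; assuming `kubeadm config validate` is available in the v1.34.1 binary, a quick check would be:

	# Validate the rendered config before `kubeadm init --config` consumes it.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new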
	
	I1016 18:29:15.438360  254209 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:29:15.446851  254209 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:29:15.446904  254209 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:29:15.455376  254209 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1016 18:29:15.468422  254209 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:29:15.485061  254209 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1016 18:29:15.499028  254209 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1016 18:29:15.502992  254209 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:29:15.514119  254209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:29:15.600483  254209 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:29:15.628358  254209 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257 for IP: 192.168.85.2
	I1016 18:29:15.628376  254209 certs.go:195] generating shared ca certs ...
	I1016 18:29:15.628396  254209 certs.go:227] acquiring lock for ca certs: {Name:mkebf15a3970a66f77cf66e14f6efeaafcab5e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:15.628509  254209 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key
	I1016 18:29:15.628562  254209 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key
	I1016 18:29:15.628573  254209 certs.go:257] generating profile certs ...
	I1016 18:29:15.628628  254209 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/client.key
	I1016 18:29:15.628653  254209 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/client.crt with IP's: []
	I1016 18:29:15.968981  254209 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/client.crt ...
	I1016 18:29:15.969015  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/client.crt: {Name:mkc48781ddaf69d7e01ca677e4849b4caaee56c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:15.969236  254209 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/client.key ...
	I1016 18:29:15.969256  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/client.key: {Name:mkc621b8b4bfad359a056391feef8110384c6c9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:15.969390  254209 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key.0a5b079c
	I1016 18:29:15.969417  254209 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.crt.0a5b079c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1016 18:29:16.391278  254209 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.crt.0a5b079c ...
	I1016 18:29:16.391304  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.crt.0a5b079c: {Name:mk6cc283b84aa2fe24d23bc336c141b44112e826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:16.391464  254209 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key.0a5b079c ...
	I1016 18:29:16.391483  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key.0a5b079c: {Name:mkcaa57ee51fbf6de8c055b9c377d12f3a0aabf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:16.391560  254209 certs.go:382] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.crt.0a5b079c -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.crt
	I1016 18:29:16.391667  254209 certs.go:386] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key.0a5b079c -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key
	I1016 18:29:16.391772  254209 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.key
	I1016 18:29:16.391791  254209 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.crt with IP's: []
	I1016 18:29:16.512660  254209 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.crt ...
	I1016 18:29:16.512692  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.crt: {Name:mk2207d19f2814a793ac863fddc556c919eb7e93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:16.512893  254209 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.key ...
	I1016 18:29:16.512912  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.key: {Name:mk634f24088d880b43b87026568c66491c8f3f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:16.513157  254209 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem (1338 bytes)
	W1016 18:29:16.513208  254209 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375_empty.pem, impossibly tiny 0 bytes
	I1016 18:29:16.513224  254209 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem (1679 bytes)
	I1016 18:29:16.513258  254209 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem (1078 bytes)
	I1016 18:29:16.513299  254209 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:29:16.513332  254209 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem (1679 bytes)
	I1016 18:29:16.513390  254209 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:29:16.514000  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:29:16.534467  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1016 18:29:16.553911  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:29:16.572888  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1016 18:29:16.593316  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1016 18:29:16.613396  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1016 18:29:16.633847  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:29:16.652859  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:29:16.671301  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem --> /usr/share/ca-certificates/12375.pem (1338 bytes)
	I1016 18:29:16.692139  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /usr/share/ca-certificates/123752.pem (1708 bytes)
	I1016 18:29:16.711854  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:29:16.733100  254209 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:29:16.748870  254209 ssh_runner.go:195] Run: openssl version
	I1016 18:29:16.756698  254209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12375.pem && ln -fs /usr/share/ca-certificates/12375.pem /etc/ssl/certs/12375.pem"
	I1016 18:29:16.765852  254209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12375.pem
	I1016 18:29:16.770890  254209 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:49 /usr/share/ca-certificates/12375.pem
	I1016 18:29:16.770951  254209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12375.pem
	I1016 18:29:16.809579  254209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12375.pem /etc/ssl/certs/51391683.0"
	I1016 18:29:16.818448  254209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123752.pem && ln -fs /usr/share/ca-certificates/123752.pem /etc/ssl/certs/123752.pem"
	I1016 18:29:16.828572  254209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123752.pem
	I1016 18:29:16.833466  254209 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:49 /usr/share/ca-certificates/123752.pem
	I1016 18:29:16.833518  254209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123752.pem
	I1016 18:29:16.869942  254209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123752.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:29:16.879161  254209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:29:16.888390  254209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:29:16.892672  254209 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:29:16.892743  254209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:29:16.928324  254209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
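	# --- Editor's note (hedged sketch, not test output) ----------------------------
	# The 51391683.0 / 3ec20f2e.0 / b5213941.0 symlinks created above use OpenSSL
	# subject-hash naming: a CA is trusted system-wide once a "<hash>.0" link points
	# at it. A minimal reproduction of the pattern minikube runs over SSH, assuming
	# a PEM certificate at the example path:
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	sudo /bin/bash -c "test -L /etc/ssl/certs/${hash}.0 || ln -fs /usr/share/ca-certificates/example.pem /etc/ssl/certs/${hash}.0"
	# --------------------------------------------------------------------------------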
	I1016 18:29:16.937883  254209 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:29:16.941427  254209 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1016 18:29:16.941477  254209 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-523257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:29:16.941533  254209 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:29:16.941590  254209 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:29:16.969823  254209 cri.go:89] found id: ""
	I1016 18:29:16.969879  254209 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:29:16.978105  254209 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 18:29:16.986454  254209 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1016 18:29:16.986509  254209 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 18:29:16.994659  254209 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1016 18:29:16.994677  254209 kubeadm.go:157] found existing configuration files:
	
	I1016 18:29:16.994734  254209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1016 18:29:17.002515  254209 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1016 18:29:17.002569  254209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 18:29:17.010005  254209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1016 18:29:17.017762  254209 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1016 18:29:17.017809  254209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 18:29:17.025281  254209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1016 18:29:17.033745  254209 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1016 18:29:17.033809  254209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	W1016 18:29:14.418032  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	W1016 18:29:16.918331  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	I1016 18:29:16.216787  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:16.217184  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:16.217232  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:16.217290  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:16.260046  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:16.260069  228782 cri.go:89] found id: ""
	I1016 18:29:16.260081  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:16.260138  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:16.264404  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:16.264461  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:16.292812  228782 cri.go:89] found id: ""
	I1016 18:29:16.292840  228782 logs.go:282] 0 containers: []
	W1016 18:29:16.292849  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:16.292857  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:16.292916  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:16.320501  228782 cri.go:89] found id: ""
	I1016 18:29:16.320525  228782 logs.go:282] 0 containers: []
	W1016 18:29:16.320537  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:16.320543  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:16.320601  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:16.349176  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:16.349201  228782 cri.go:89] found id: ""
	I1016 18:29:16.349211  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:16.349261  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:16.353478  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:16.353557  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:16.381526  228782 cri.go:89] found id: ""
	I1016 18:29:16.381551  228782 logs.go:282] 0 containers: []
	W1016 18:29:16.381560  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:16.381566  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:16.381622  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:16.410669  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:16.410688  228782 cri.go:89] found id: ""
	I1016 18:29:16.410698  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:16.410766  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:16.415132  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:16.415201  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:16.444976  228782 cri.go:89] found id: ""
	I1016 18:29:16.445004  228782 logs.go:282] 0 containers: []
	W1016 18:29:16.445015  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:16.445023  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:16.445079  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:16.476137  228782 cri.go:89] found id: ""
	I1016 18:29:16.476164  228782 logs.go:282] 0 containers: []
	W1016 18:29:16.476174  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:16.476185  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:16.476198  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:16.507953  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:16.507978  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:16.570051  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:16.570092  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:16.603032  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:16.603070  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:16.693780  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:16.693814  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:16.710844  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:16.710881  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:16.773893  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:16.773917  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:16.773931  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:16.807340  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:16.807368  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
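	# --- Editor's note (hedged sketch, not test output) ----------------------------
	# The block above (and its repeats below) is minikube's log-gathering pass while
	# the apiserver is down: probe healthz, enumerate each control-plane component's
	# container by name via crictl, tail what exists, and fall back to journald for
	# the rest. The two primitives, assumed runnable on the node:
	curl -k https://192.168.76.2:8443/healthz          # "connection refused" while down
	sudo crictl ps -a --quiet --name=kube-apiserver    # container IDs; empty if none
	# --------------------------------------------------------------------------------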
	I1016 18:29:14.828263  249491 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 18:29:14.833625  249491 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 18:29:14.833646  249491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 18:29:14.848089  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 18:29:15.084417  249491 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 18:29:15.084527  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:15.084544  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-063117 minikube.k8s.io/updated_at=2025_10_16T18_29_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=embed-certs-063117 minikube.k8s.io/primary=true
	I1016 18:29:15.180501  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:15.180512  249491 ops.go:34] apiserver oom_adj: -16
	I1016 18:29:15.681132  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:16.180980  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:16.681259  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:17.181627  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:17.681148  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:18.180852  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:18.681519  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:19.180964  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:19.681224  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:19.769979  249491 kubeadm.go:1113] duration metric: took 4.685530547s to wait for elevateKubeSystemPrivileges
	I1016 18:29:19.770014  249491 kubeadm.go:402] duration metric: took 18.251827782s to StartCluster
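	# --- Editor's note (hedged sketch, not test output) ----------------------------
	# The half-second cadence of "kubectl get sa default" calls above is a
	# poll-until-exists loop: the minikube-rbac clusterrolebinding is only usable
	# once kube-controller-manager has provisioned the default ServiceAccount.
	# Roughly:
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
	# --------------------------------------------------------------------------------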
	I1016 18:29:19.770034  249491 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:19.770128  249491 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:29:19.771546  249491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:19.771780  249491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 18:29:19.771795  249491 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:29:19.771842  249491 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:29:19.771949  249491 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-063117"
	I1016 18:29:19.771958  249491 addons.go:69] Setting default-storageclass=true in profile "embed-certs-063117"
	I1016 18:29:19.771971  249491 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-063117"
	I1016 18:29:19.771979  249491 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-063117"
	I1016 18:29:19.771979  249491 config.go:182] Loaded profile config "embed-certs-063117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:29:19.772007  249491 host.go:66] Checking if "embed-certs-063117" exists ...
	I1016 18:29:19.772413  249491 cli_runner.go:164] Run: docker container inspect embed-certs-063117 --format={{.State.Status}}
	I1016 18:29:19.772558  249491 cli_runner.go:164] Run: docker container inspect embed-certs-063117 --format={{.State.Status}}
	I1016 18:29:19.776284  249491 out.go:179] * Verifying Kubernetes components...
	I1016 18:29:19.777682  249491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:29:19.800564  249491 addons.go:238] Setting addon default-storageclass=true in "embed-certs-063117"
	I1016 18:29:19.800668  249491 host.go:66] Checking if "embed-certs-063117" exists ...
	I1016 18:29:19.801165  249491 cli_runner.go:164] Run: docker container inspect embed-certs-063117 --format={{.State.Status}}
	I1016 18:29:19.803130  249491 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:29:19.804678  249491 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:29:19.804699  249491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:29:19.804856  249491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-063117
	I1016 18:29:19.826115  249491 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:29:19.826138  249491 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:29:19.826207  249491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-063117
	I1016 18:29:19.832338  249491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/embed-certs-063117/id_rsa Username:docker}
	I1016 18:29:19.861747  249491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/embed-certs-063117/id_rsa Username:docker}
	I1016 18:29:19.882221  249491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 18:29:19.965940  249491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:29:19.969094  249491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:29:19.987077  249491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:29:20.101590  249491 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1016 18:29:20.105376  249491 node_ready.go:35] waiting up to 6m0s for node "embed-certs-063117" to be "Ready" ...
	I1016 18:29:20.328792  249491 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
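	# --- Editor's note (hedged sketch, not test output) ----------------------------
	# The sed pipeline at 18:29:19.882221 above injects a hosts{} stanza ahead of the
	# forward directive in the coredns ConfigMap, so host.minikube.internal resolves
	# from inside pods. The same round-trip with plain kubectl:
	kubectl -n kube-system get configmap coredns -o yaml \
	  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' \
	  | kubectl replace -f -
	# --------------------------------------------------------------------------------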
	I1016 18:29:17.041611  254209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1016 18:29:17.049911  254209 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1016 18:29:17.049971  254209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
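	# --- Editor's note (hedged sketch, not test output) ----------------------------
	# The four grep/rm pairs above (admin.conf, kubelet.conf, controller-manager.conf,
	# scheduler.conf) are minikube's stale-kubeconfig cleanup: a config file is kept
	# only if it already targets the expected control-plane endpoint. Condensed
	# equivalent, assuming this profile's port 8444:
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done
	# --------------------------------------------------------------------------------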
	I1016 18:29:17.058089  254209 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1016 18:29:17.137219  254209 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1016 18:29:17.203085  254209 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
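	# --- Editor's note (interpretation, not test output) ----------------------------
	# Both preflight warnings above are expected here: the GCP kernel ships no
	# "configs" module for kubeadm's SystemVerification (which minikube already
	# skips for the docker driver, per the 18:29:16.986454 line above), and kubelet
	# is started by minikube itself rather than enabled as a systemd unit. On a
	# regular host the printed fix would be:
	sudo systemctl enable kubelet.service
	# --------------------------------------------------------------------------------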
	W1016 18:29:19.418382  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	W1016 18:29:21.918282  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	I1016 18:29:19.359592  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:19.360042  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:19.360098  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:19.360144  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:19.393040  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:19.393066  228782 cri.go:89] found id: ""
	I1016 18:29:19.393076  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:19.393131  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:19.397814  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:19.397881  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:19.427286  228782 cri.go:89] found id: ""
	I1016 18:29:19.427314  228782 logs.go:282] 0 containers: []
	W1016 18:29:19.427322  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:19.427327  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:19.427375  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:19.462227  228782 cri.go:89] found id: ""
	I1016 18:29:19.462266  228782 logs.go:282] 0 containers: []
	W1016 18:29:19.462279  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:19.462287  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:19.462348  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:19.496749  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:19.496774  228782 cri.go:89] found id: ""
	I1016 18:29:19.496783  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:19.496840  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:19.501521  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:19.501595  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:19.529247  228782 cri.go:89] found id: ""
	I1016 18:29:19.529274  228782 logs.go:282] 0 containers: []
	W1016 18:29:19.529289  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:19.529296  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:19.529359  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:19.564781  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:19.564804  228782 cri.go:89] found id: ""
	I1016 18:29:19.564814  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:19.564929  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:19.570532  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:19.570606  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:19.604855  228782 cri.go:89] found id: ""
	I1016 18:29:19.604883  228782 logs.go:282] 0 containers: []
	W1016 18:29:19.604893  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:19.604901  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:19.604953  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:19.638992  228782 cri.go:89] found id: ""
	I1016 18:29:19.639022  228782 logs.go:282] 0 containers: []
	W1016 18:29:19.639034  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:19.639045  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:19.639061  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:19.701460  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:19.701505  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:19.742847  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:19.742874  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:19.829432  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:19.829906  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:19.877323  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:19.877363  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:20.013993  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:20.014026  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:20.033495  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:20.033528  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:20.125927  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:20.125955  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:20.125979  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:22.676779  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:22.677325  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:22.677386  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:22.677441  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:22.704967  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:22.704992  228782 cri.go:89] found id: ""
	I1016 18:29:22.705001  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:22.705054  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:22.709172  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:22.709227  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:22.737460  228782 cri.go:89] found id: ""
	I1016 18:29:22.737488  228782 logs.go:282] 0 containers: []
	W1016 18:29:22.737497  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:22.737502  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:22.737557  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:22.765144  228782 cri.go:89] found id: ""
	I1016 18:29:22.765167  228782 logs.go:282] 0 containers: []
	W1016 18:29:22.765174  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:22.765182  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:22.765234  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:22.794804  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:22.794830  228782 cri.go:89] found id: ""
	I1016 18:29:22.794842  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:22.794896  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:22.799171  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:22.799236  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:22.826223  228782 cri.go:89] found id: ""
	I1016 18:29:22.826245  228782 logs.go:282] 0 containers: []
	W1016 18:29:22.826254  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:22.826262  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:22.826320  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:22.853663  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:22.853687  228782 cri.go:89] found id: ""
	I1016 18:29:22.853697  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:22.853766  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:22.857917  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:22.857976  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:22.886082  228782 cri.go:89] found id: ""
	I1016 18:29:22.886104  228782 logs.go:282] 0 containers: []
	W1016 18:29:22.886111  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:22.886116  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:22.886161  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:22.914756  228782 cri.go:89] found id: ""
	I1016 18:29:22.914785  228782 logs.go:282] 0 containers: []
	W1016 18:29:22.914795  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:22.914806  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:22.914819  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:22.948094  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:22.948123  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:23.063153  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:23.063191  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:23.086210  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:23.086246  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:23.158625  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:23.158644  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:23.158655  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:23.196125  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:23.196164  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:23.249568  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:23.249603  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:23.278700  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:23.278755  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:20.330231  249491 addons.go:514] duration metric: took 558.387286ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1016 18:29:20.605751  249491 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-063117" context rescaled to 1 replicas
	W1016 18:29:22.108077  249491 node_ready.go:57] node "embed-certs-063117" has "Ready":"False" status (will retry)
	W1016 18:29:24.108885  249491 node_ready.go:57] node "embed-certs-063117" has "Ready":"False" status (will retry)
	W1016 18:29:23.920030  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	I1016 18:29:26.418984  245371 pod_ready.go:94] pod "coredns-66bc5c9577-ntqqg" is "Ready"
	I1016 18:29:26.419015  245371 pod_ready.go:86] duration metric: took 37.506349558s for pod "coredns-66bc5c9577-ntqqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.421830  245371 pod_ready.go:83] waiting for pod "etcd-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.426444  245371 pod_ready.go:94] pod "etcd-no-preload-808539" is "Ready"
	I1016 18:29:26.426468  245371 pod_ready.go:86] duration metric: took 4.611842ms for pod "etcd-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.428754  245371 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.433020  245371 pod_ready.go:94] pod "kube-apiserver-no-preload-808539" is "Ready"
	I1016 18:29:26.433042  245371 pod_ready.go:86] duration metric: took 4.265191ms for pod "kube-apiserver-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.435232  245371 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.616325  245371 pod_ready.go:94] pod "kube-controller-manager-no-preload-808539" is "Ready"
	I1016 18:29:26.616358  245371 pod_ready.go:86] duration metric: took 181.098764ms for pod "kube-controller-manager-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.816373  245371 pod_ready.go:83] waiting for pod "kube-proxy-68kl9" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:27.217098  245371 pod_ready.go:94] pod "kube-proxy-68kl9" is "Ready"
	I1016 18:29:27.217132  245371 pod_ready.go:86] duration metric: took 400.735206ms for pod "kube-proxy-68kl9" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:27.419792  245371 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:27.816058  245371 pod_ready.go:94] pod "kube-scheduler-no-preload-808539" is "Ready"
	I1016 18:29:27.816084  245371 pod_ready.go:86] duration metric: took 396.261228ms for pod "kube-scheduler-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:27.816099  245371 pod_ready.go:40] duration metric: took 38.907119982s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:29:27.860942  245371 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1016 18:29:27.862530  245371 out.go:179] * Done! kubectl is now configured to use "no-preload-808539" cluster and "default" namespace by default
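	# --- Editor's note (hedged sketch, not test output) ----------------------------
	# The per-component readiness waits above (coredns, etcd, apiserver, controller
	# manager, proxy, scheduler) are approximately what a label-based kubectl wait
	# does, e.g. for CoreDNS:
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	# --------------------------------------------------------------------------------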
	I1016 18:29:28.379667  254209 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1016 18:29:28.379756  254209 kubeadm.go:318] [preflight] Running pre-flight checks
	I1016 18:29:28.379854  254209 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1016 18:29:28.379919  254209 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1016 18:29:28.379960  254209 kubeadm.go:318] OS: Linux
	I1016 18:29:28.380039  254209 kubeadm.go:318] CGROUPS_CPU: enabled
	I1016 18:29:28.380108  254209 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1016 18:29:28.380162  254209 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1016 18:29:28.380210  254209 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1016 18:29:28.380249  254209 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1016 18:29:28.380302  254209 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1016 18:29:28.380342  254209 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1016 18:29:28.380378  254209 kubeadm.go:318] CGROUPS_IO: enabled
	I1016 18:29:28.380440  254209 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1016 18:29:28.380523  254209 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1016 18:29:28.380601  254209 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1016 18:29:28.380687  254209 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1016 18:29:28.382133  254209 out.go:252]   - Generating certificates and keys ...
	I1016 18:29:28.382223  254209 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 18:29:28.382325  254209 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 18:29:28.382409  254209 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 18:29:28.382524  254209 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 18:29:28.382610  254209 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 18:29:28.382684  254209 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 18:29:28.382785  254209 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 18:29:28.382994  254209 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-523257 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1016 18:29:28.383094  254209 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 18:29:28.383267  254209 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-523257 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1016 18:29:28.383368  254209 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 18:29:28.383477  254209 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 18:29:28.383518  254209 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 18:29:28.383588  254209 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 18:29:28.383656  254209 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 18:29:28.383737  254209 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 18:29:28.383814  254209 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 18:29:28.383912  254209 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 18:29:28.383990  254209 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 18:29:28.384065  254209 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 18:29:28.384119  254209 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 18:29:28.385312  254209 out.go:252]   - Booting up control plane ...
	I1016 18:29:28.385390  254209 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 18:29:28.385468  254209 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 18:29:28.385537  254209 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 18:29:28.385629  254209 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 18:29:28.385708  254209 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 18:29:28.385846  254209 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 18:29:28.385944  254209 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 18:29:28.385987  254209 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 18:29:28.386112  254209 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 18:29:28.386205  254209 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 18:29:28.386257  254209 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501903762s
	I1016 18:29:28.386370  254209 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 18:29:28.386456  254209 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1016 18:29:28.386534  254209 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 18:29:28.386605  254209 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 18:29:28.386709  254209 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.138506302s
	I1016 18:29:28.386833  254209 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.998459766s
	I1016 18:29:28.386943  254209 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001498967s
	I1016 18:29:28.387079  254209 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 18:29:28.387241  254209 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 18:29:28.387341  254209 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 18:29:28.387557  254209 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-523257 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 18:29:28.387622  254209 kubeadm.go:318] [bootstrap-token] Using token: wqx7bh.ga0ezwq7c18mbgbm
	I1016 18:29:28.388960  254209 out.go:252]   - Configuring RBAC rules ...
	I1016 18:29:28.389058  254209 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 18:29:28.389159  254209 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 18:29:28.389377  254209 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 18:29:28.389512  254209 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 18:29:28.389640  254209 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 18:29:28.389787  254209 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 18:29:28.389938  254209 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 18:29:28.389981  254209 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 18:29:28.390023  254209 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 18:29:28.390028  254209 kubeadm.go:318] 
	I1016 18:29:28.390074  254209 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 18:29:28.390080  254209 kubeadm.go:318] 
	I1016 18:29:28.390140  254209 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 18:29:28.390146  254209 kubeadm.go:318] 
	I1016 18:29:28.390170  254209 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 18:29:28.390217  254209 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 18:29:28.390266  254209 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 18:29:28.390275  254209 kubeadm.go:318] 
	I1016 18:29:28.390327  254209 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 18:29:28.390333  254209 kubeadm.go:318] 
	I1016 18:29:28.390378  254209 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 18:29:28.390395  254209 kubeadm.go:318] 
	I1016 18:29:28.390444  254209 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 18:29:28.390542  254209 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 18:29:28.390666  254209 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 18:29:28.390676  254209 kubeadm.go:318] 
	I1016 18:29:28.390772  254209 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 18:29:28.390842  254209 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 18:29:28.390851  254209 kubeadm.go:318] 
	I1016 18:29:28.390920  254209 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token wqx7bh.ga0ezwq7c18mbgbm \
	I1016 18:29:28.391011  254209 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c \
	I1016 18:29:28.391035  254209 kubeadm.go:318] 	--control-plane 
	I1016 18:29:28.391043  254209 kubeadm.go:318] 
	I1016 18:29:28.391127  254209 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 18:29:28.391140  254209 kubeadm.go:318] 
	I1016 18:29:28.391228  254209 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token wqx7bh.ga0ezwq7c18mbgbm \
	I1016 18:29:28.391331  254209 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c 
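	# --- Editor's note (hedged sketch, not test output) ----------------------------
	# The sha256 discovery hash in the join commands above is a digest of the cluster
	# CA's public key; it can be recomputed on the control plane (this cluster keeps
	# its certs under /var/lib/minikube/certs):
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# --------------------------------------------------------------------------------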
	I1016 18:29:28.391345  254209 cni.go:84] Creating CNI manager for ""
	I1016 18:29:28.391351  254209 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:29:28.392742  254209 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 18:29:25.836785  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:25.837228  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:25.837274  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:25.837338  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:25.864224  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:25.864250  228782 cri.go:89] found id: ""
	I1016 18:29:25.864260  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:25.864307  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:25.868459  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:25.868525  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:25.894631  228782 cri.go:89] found id: ""
	I1016 18:29:25.894658  228782 logs.go:282] 0 containers: []
	W1016 18:29:25.894671  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:25.894679  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:25.894750  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:25.926152  228782 cri.go:89] found id: ""
	I1016 18:29:25.926179  228782 logs.go:282] 0 containers: []
	W1016 18:29:25.926190  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:25.926198  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:25.926251  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:25.963328  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:25.963355  228782 cri.go:89] found id: ""
	I1016 18:29:25.963365  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:25.963425  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:25.968500  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:25.968557  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:26.000655  228782 cri.go:89] found id: ""
	I1016 18:29:26.000684  228782 logs.go:282] 0 containers: []
	W1016 18:29:26.000693  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:26.000701  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:26.000796  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:26.033474  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:26.033497  228782 cri.go:89] found id: ""
	I1016 18:29:26.033505  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:26.033570  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:26.038349  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:26.038413  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:26.069780  228782 cri.go:89] found id: ""
	I1016 18:29:26.069808  228782 logs.go:282] 0 containers: []
	W1016 18:29:26.069818  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:26.069824  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:26.069882  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:26.103136  228782 cri.go:89] found id: ""
	I1016 18:29:26.103171  228782 logs.go:282] 0 containers: []
	W1016 18:29:26.103183  228782 logs.go:284] No container was found matching "storage-provisioner"
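[Editor's note] The block above is minikube's container inventory: for each control-plane component it runs `sudo crictl ps -a --quiet --name=<component>`, where --quiet prints only container ids and empty output is logged as "0 containers". A compact sketch of the same loop, run locally rather than over SSH (sudo and crictl must be available):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // inventory maps each component name to the container ids crictl
    // reports for it; an empty slice reproduces the
    // `No container was found matching "etcd"` warnings above.
    func inventory(names []string) map[string][]string {
        out := map[string][]string{}
        for _, n := range names {
            b, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+n).Output()
            if err != nil {
                continue // crictl missing or runtime down; skipped in this sketch
            }
            ids := strings.Fields(string(b))
            out[n] = ids
            fmt.Printf("%d containers for %q: %v\n", len(ids), n, ids)
        }
        return out
    }

    func main() {
        inventory([]string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"})
    }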
	I1016 18:29:26.103201  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:26.103215  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:26.139969  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:26.139999  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:26.208221  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:26.208254  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:26.244473  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:26.244505  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:26.350643  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:26.350676  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:26.369275  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:26.369312  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:26.442326  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:26.442349  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:26.442365  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:26.483134  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:26.483169  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
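[Editor's note] Each diagnostic pass above repeats a fixed set of gathers: `crictl logs --tail 400` for every discovered container id, journalctl for crio and kubelet, a filtered dmesg, and `kubectl describe nodes` (which keeps failing here because nothing is listening on localhost:8443). A condensed sketch of one pass with failures only logged, as logs.go:130 does; the command strings are copied from the log, and the container id in main is a truncated illustration, not a real id:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func gather(ids []string) {
        cmds := []string{
            "sudo journalctl -u crio -n 400",
            "sudo journalctl -u kubelet -n 400",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
        }
        for _, id := range ids {
            cmds = append(cmds, "sudo /usr/local/bin/crictl logs --tail 400 "+id)
        }
        for _, c := range cmds {
            out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
            if err != nil {
                fmt.Printf("W failed %q: %v\n", c, err) // mirrors the logs.go:130 warning
                continue
            }
            fmt.Printf("I gathered %d bytes from %q\n", len(out), c)
        }
    }

    func main() {
        gather([]string{"c07b1a4c7751..."}) // truncated id, illustration only
    }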
	I1016 18:29:29.040764  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:29.041151  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:29.041199  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:29.041257  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	W1016 18:29:26.109979  249491 node_ready.go:57] node "embed-certs-063117" has "Ready":"False" status (will retry)
	W1016 18:29:28.608795  249491 node_ready.go:57] node "embed-certs-063117" has "Ready":"False" status (will retry)
	I1016 18:29:28.393724  254209 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 18:29:28.398316  254209 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 18:29:28.398334  254209 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 18:29:28.412460  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
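[Editor's note] ssh_runner.go:362 streams the 2601-byte kindnet manifest from memory to /var/tmp/minikube/cni.yaml on the node, then applies it with the version-matched kubectl. A local-only sketch of the same two steps, dropping the SSH hop; the manifest body is a placeholder:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        manifest := []byte("# placeholder CNI manifest body\n")
        // Stand-in for ssh_runner's "scp memory --> /var/tmp/minikube/cni.yaml".
        if err := os.MkdirAll("/var/tmp/minikube", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0o644); err != nil {
            panic(err)
        }
        // The apply step as logged, minus the SSH transport.
        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
            "apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }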
	I1016 18:29:28.630658  254209 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 18:29:28.630739  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:28.630750  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-523257 minikube.k8s.io/updated_at=2025_10_16T18_29_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=default-k8s-diff-port-523257 minikube.k8s.io/primary=true
	I1016 18:29:28.644240  254209 ops.go:34] apiserver oom_adj: -16
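[Editor's note] ops.go:34 confirms the kubelet set the apiserver's OOM-killer bias: the bash at 18:29:28.630 cats /proc/$(pgrep kube-apiserver)/oom_adj and sees -16. The same check done natively, assuming a single local kube-apiserver process (pgrep -n picks the newest match, which is this sketch's choice rather than the log's exact invocation):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Find the newest kube-apiserver process, as the shell pipeline does.
        out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            panic(err) // no such process
        }
        pid := strings.TrimSpace(string(out))
        // /proc/<pid>/oom_adj is world-readable; -16 means "hard to OOM-kill".
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
    }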
	I1016 18:29:28.721533  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:29.221909  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:29.721810  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:30.222262  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:30.721738  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:31.222945  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:31.722510  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:30.608496  249491 node_ready.go:49] node "embed-certs-063117" is "Ready"
	I1016 18:29:30.608520  249491 node_ready.go:38] duration metric: took 10.503114261s for node "embed-certs-063117" to be "Ready" ...
	I1016 18:29:30.608533  249491 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:29:30.608583  249491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:29:30.621063  249491 api_server.go:72] duration metric: took 10.849240762s to wait for apiserver process to appear ...
	I1016 18:29:30.621089  249491 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:29:30.621109  249491 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1016 18:29:30.626152  249491 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1016 18:29:30.627107  249491 api_server.go:141] control plane version: v1.34.1
	I1016 18:29:30.627128  249491 api_server.go:131] duration metric: took 6.033168ms to wait for apiserver health ...
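[Editor's note] The healthz probes in this section show both outcomes: "connection refused" while 228782's apiserver is still down, and "returned 200: ok" once 249491's comes up. A minimal polling sketch; InsecureSkipVerify is a shortcut for this sketch only, whereas minikube verifies against the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Sketch shortcut; the real check trusts the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // the "returned 200: ok" path
                }
            }
            time.Sleep(time.Second) // the "stopped: ... connection refused" path
        }
        return fmt.Errorf("apiserver healthz never returned 200 within %v", timeout)
    }

    func main() {
        if err := waitHealthz("https://192.168.103.2:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }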
	I1016 18:29:30.627136  249491 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:29:30.630659  249491 system_pods.go:59] 8 kube-system pods found
	I1016 18:29:30.630699  249491 system_pods.go:61] "coredns-66bc5c9577-v85b5" [023f2420-4132-430e-90ed-4e7c5533aeeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:29:30.630746  249491 system_pods.go:61] "etcd-embed-certs-063117" [fd54eaf6-ae80-44ce-a6fe-6fbeeac7ea85] Running
	I1016 18:29:30.630759  249491 system_pods.go:61] "kindnet-9qp8q" [6c45c361-9d61-45f5-9863-a1ceb556db84] Running
	I1016 18:29:30.630772  249491 system_pods.go:61] "kube-apiserver-embed-certs-063117" [a04b20d4-2663-4436-aad1-a1951df32809] Running
	I1016 18:29:30.630916  249491 system_pods.go:61] "kube-controller-manager-embed-certs-063117" [49fb248e-c033-4cc9-b1f0-51c0b60eaa1c] Running
	I1016 18:29:30.630926  249491 system_pods.go:61] "kube-proxy-rsvq2" [7cb8239f-5115-4775-aab6-f0fc7c2dc2fb] Running
	I1016 18:29:30.630937  249491 system_pods.go:61] "kube-scheduler-embed-certs-063117" [28178b78-ce0e-4ad4-b335-3180c4a3e3a3] Running
	I1016 18:29:30.630959  249491 system_pods.go:61] "storage-provisioner" [cc86ca12-3c7b-4447-97a9-b998051c6b68] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:29:30.630971  249491 system_pods.go:74] duration metric: took 3.829293ms to wait for pod list to return data ...
	I1016 18:29:30.630985  249491 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:29:30.633438  249491 default_sa.go:45] found service account: "default"
	I1016 18:29:30.633459  249491 default_sa.go:55] duration metric: took 2.463926ms for default service account to be created ...
	I1016 18:29:30.633469  249491 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 18:29:30.637230  249491 system_pods.go:86] 8 kube-system pods found
	I1016 18:29:30.637270  249491 system_pods.go:89] "coredns-66bc5c9577-v85b5" [023f2420-4132-430e-90ed-4e7c5533aeeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:29:30.637278  249491 system_pods.go:89] "etcd-embed-certs-063117" [fd54eaf6-ae80-44ce-a6fe-6fbeeac7ea85] Running
	I1016 18:29:30.637286  249491 system_pods.go:89] "kindnet-9qp8q" [6c45c361-9d61-45f5-9863-a1ceb556db84] Running
	I1016 18:29:30.637292  249491 system_pods.go:89] "kube-apiserver-embed-certs-063117" [a04b20d4-2663-4436-aad1-a1951df32809] Running
	I1016 18:29:30.637299  249491 system_pods.go:89] "kube-controller-manager-embed-certs-063117" [49fb248e-c033-4cc9-b1f0-51c0b60eaa1c] Running
	I1016 18:29:30.637308  249491 system_pods.go:89] "kube-proxy-rsvq2" [7cb8239f-5115-4775-aab6-f0fc7c2dc2fb] Running
	I1016 18:29:30.637313  249491 system_pods.go:89] "kube-scheduler-embed-certs-063117" [28178b78-ce0e-4ad4-b335-3180c4a3e3a3] Running
	I1016 18:29:30.637321  249491 system_pods.go:89] "storage-provisioner" [cc86ca12-3c7b-4447-97a9-b998051c6b68] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:29:30.637342  249491 retry.go:31] will retry after 264.141768ms: missing components: kube-dns
	I1016 18:29:30.905515  249491 system_pods.go:86] 8 kube-system pods found
	I1016 18:29:30.905557  249491 system_pods.go:89] "coredns-66bc5c9577-v85b5" [023f2420-4132-430e-90ed-4e7c5533aeeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:29:30.905566  249491 system_pods.go:89] "etcd-embed-certs-063117" [fd54eaf6-ae80-44ce-a6fe-6fbeeac7ea85] Running
	I1016 18:29:30.905573  249491 system_pods.go:89] "kindnet-9qp8q" [6c45c361-9d61-45f5-9863-a1ceb556db84] Running
	I1016 18:29:30.905578  249491 system_pods.go:89] "kube-apiserver-embed-certs-063117" [a04b20d4-2663-4436-aad1-a1951df32809] Running
	I1016 18:29:30.905583  249491 system_pods.go:89] "kube-controller-manager-embed-certs-063117" [49fb248e-c033-4cc9-b1f0-51c0b60eaa1c] Running
	I1016 18:29:30.905586  249491 system_pods.go:89] "kube-proxy-rsvq2" [7cb8239f-5115-4775-aab6-f0fc7c2dc2fb] Running
	I1016 18:29:30.905591  249491 system_pods.go:89] "kube-scheduler-embed-certs-063117" [28178b78-ce0e-4ad4-b335-3180c4a3e3a3] Running
	I1016 18:29:30.905599  249491 system_pods.go:89] "storage-provisioner" [cc86ca12-3c7b-4447-97a9-b998051c6b68] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:29:30.905621  249491 retry.go:31] will retry after 272.815126ms: missing components: kube-dns
	I1016 18:29:31.182959  249491 system_pods.go:86] 8 kube-system pods found
	I1016 18:29:31.182996  249491 system_pods.go:89] "coredns-66bc5c9577-v85b5" [023f2420-4132-430e-90ed-4e7c5533aeeb] Running
	I1016 18:29:31.183004  249491 system_pods.go:89] "etcd-embed-certs-063117" [fd54eaf6-ae80-44ce-a6fe-6fbeeac7ea85] Running
	I1016 18:29:31.183010  249491 system_pods.go:89] "kindnet-9qp8q" [6c45c361-9d61-45f5-9863-a1ceb556db84] Running
	I1016 18:29:31.183016  249491 system_pods.go:89] "kube-apiserver-embed-certs-063117" [a04b20d4-2663-4436-aad1-a1951df32809] Running
	I1016 18:29:31.183023  249491 system_pods.go:89] "kube-controller-manager-embed-certs-063117" [49fb248e-c033-4cc9-b1f0-51c0b60eaa1c] Running
	I1016 18:29:31.183028  249491 system_pods.go:89] "kube-proxy-rsvq2" [7cb8239f-5115-4775-aab6-f0fc7c2dc2fb] Running
	I1016 18:29:31.183034  249491 system_pods.go:89] "kube-scheduler-embed-certs-063117" [28178b78-ce0e-4ad4-b335-3180c4a3e3a3] Running
	I1016 18:29:31.183038  249491 system_pods.go:89] "storage-provisioner" [cc86ca12-3c7b-4447-97a9-b998051c6b68] Running
	I1016 18:29:31.183048  249491 system_pods.go:126] duration metric: took 549.572251ms to wait for k8s-apps to be running ...
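[Editor's note] The retry.go:31 lines above ("will retry after 264.141768ms: missing components: kube-dns") come from a backoff-and-retry wrapper around the pod check, which succeeds here on the third pass once coredns leaves Pending. A generic sketch of the pattern; the doubling and jitter factors are this sketch's assumptions, not minikube's exact constants:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs check until it succeeds or attempts are exhausted,
    // sleeping a jittered, growing interval between tries.
    func retry(attempts int, base time.Duration, check func() error) error {
        wait := base
        for i := 0; i < attempts; i++ {
            if err := check(); err == nil {
                return nil
            } else {
                sleep := wait + time.Duration(rand.Int63n(int64(wait))) // +0..100% jitter (assumed)
                fmt.Printf("will retry after %v: %v\n", sleep, err)
                time.Sleep(sleep)
                wait *= 2
            }
        }
        return errors.New("out of retries")
    }

    func main() {
        ready := false
        _ = retry(5, 200*time.Millisecond, func() error {
            if !ready {
                ready = true // pretend kube-dns shows up on the second pass
                return errors.New("missing components: kube-dns")
            }
            return nil
        })
    }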
	I1016 18:29:31.183057  249491 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 18:29:31.183107  249491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:29:31.196951  249491 system_svc.go:56] duration metric: took 13.886426ms WaitForService to wait for kubelet
	I1016 18:29:31.196976  249491 kubeadm.go:586] duration metric: took 11.42515893s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:29:31.196996  249491 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:29:31.200148  249491 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1016 18:29:31.200174  249491 node_conditions.go:123] node cpu capacity is 8
	I1016 18:29:31.200186  249491 node_conditions.go:105] duration metric: took 3.185275ms to run NodePressure ...
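[Editor's note] Verifying NodePressure reads each node's capacity (cpu 8 and ephemeral storage 304681132Ki above) and would fail on a node reporting pressure. A rough equivalent via kubectl jsonpath against the current context; the field paths are standard Node API fields, and a True MemoryPressure/DiskPressure/PIDPressure condition is what the real check guards against:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Print name, cpu, and ephemeral-storage capacity per node.
        jsonpath := `jsonpath={range .items[*]}{.metadata.name}{": cpu="}` +
            `{.status.capacity.cpu}{" ephemeral-storage="}` +
            `{.status.capacity.ephemeral-storage}{"\n"}{end}`
        cmd := exec.Command("kubectl", "get", "nodes", "-o", jsonpath)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        _ = cmd.Run()
    }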
	I1016 18:29:31.200197  249491 start.go:241] waiting for startup goroutines ...
	I1016 18:29:31.200203  249491 start.go:246] waiting for cluster config update ...
	I1016 18:29:31.200216  249491 start.go:255] writing updated cluster config ...
	I1016 18:29:31.200464  249491 ssh_runner.go:195] Run: rm -f paused
	I1016 18:29:31.204547  249491 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:29:31.208677  249491 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v85b5" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.212900  249491 pod_ready.go:94] pod "coredns-66bc5c9577-v85b5" is "Ready"
	I1016 18:29:31.212920  249491 pod_ready.go:86] duration metric: took 4.216559ms for pod "coredns-66bc5c9577-v85b5" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.214804  249491 pod_ready.go:83] waiting for pod "etcd-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.218157  249491 pod_ready.go:94] pod "etcd-embed-certs-063117" is "Ready"
	I1016 18:29:31.218176  249491 pod_ready.go:86] duration metric: took 3.355374ms for pod "etcd-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.219965  249491 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.224645  249491 pod_ready.go:94] pod "kube-apiserver-embed-certs-063117" is "Ready"
	I1016 18:29:31.224665  249491 pod_ready.go:86] duration metric: took 4.684934ms for pod "kube-apiserver-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.226498  249491 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.608777  249491 pod_ready.go:94] pod "kube-controller-manager-embed-certs-063117" is "Ready"
	I1016 18:29:31.608802  249491 pod_ready.go:86] duration metric: took 382.283573ms for pod "kube-controller-manager-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.809171  249491 pod_ready.go:83] waiting for pod "kube-proxy-rsvq2" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:32.209404  249491 pod_ready.go:94] pod "kube-proxy-rsvq2" is "Ready"
	I1016 18:29:32.209429  249491 pod_ready.go:86] duration metric: took 400.235447ms for pod "kube-proxy-rsvq2" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:32.410356  249491 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:32.809170  249491 pod_ready.go:94] pod "kube-scheduler-embed-certs-063117" is "Ready"
	I1016 18:29:32.809199  249491 pod_ready.go:86] duration metric: took 398.804528ms for pod "kube-scheduler-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:32.809212  249491 pod_ready.go:40] duration metric: took 1.604631583s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
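[Editor's note] pod_ready.go gives every pod carrying one of the listed control-plane labels up to 4m0s to report Ready (or disappear). kubectl wait covers the Ready half of that contract; a wrapper in the same spirit, noting that the "or be gone" case has no direct kubectl analogue and that kubectl wait errors when no pod matches a label:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        labels := []string{
            "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
            "component=kube-controller-manager", "k8s-app=kube-proxy",
            "component=kube-scheduler",
        }
        for _, l := range labels {
            // Blocks until matching pods are Ready or the timeout expires.
            cmd := exec.Command("kubectl", "-n", "kube-system", "wait", "pod",
                "-l", l, "--for=condition=Ready", "--timeout=4m")
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            _ = cmd.Run()
        }
    }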
	I1016 18:29:32.863208  249491 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1016 18:29:32.865029  249491 out.go:179] * Done! kubectl is now configured to use "embed-certs-063117" cluster and "default" namespace by default
	I1016 18:29:32.222199  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:32.721921  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:33.221579  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:33.296163  254209 kubeadm.go:1113] duration metric: took 4.665491695s to wait for elevateKubeSystemPrivileges
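[Editor's note] The half-second `kubectl get sa default` polls from 18:29:28.721 to 18:29:33.221 are elevateKubeSystemPrivileges: bind kube-system:default to cluster-admin, then wait for the default service account to materialize. A compact sketch of that sequence; create errors (e.g. the binding already exists) are deliberately ignored here:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    const kubectl = "/var/lib/minikube/binaries/v1.34.1/kubectl"
    const kubeconfig = "--kubeconfig=/var/lib/minikube/kubeconfig"

    func main() {
        // Grant cluster-admin to kube-system:default, as logged at 18:29:28.630.
        _ = exec.Command("sudo", kubectl, "create", "clusterrolebinding", "minikube-rbac",
            "--clusterrole=cluster-admin", "--serviceaccount=kube-system:default",
            kubeconfig).Run()

        // Poll every 500ms, matching the spacing of the `get sa default` lines.
        for i := 0; i < 120; i++ {
            if exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run() == nil {
                fmt.Println("default service account present")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }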
	I1016 18:29:33.296194  254209 kubeadm.go:402] duration metric: took 16.35471992s to StartCluster
	I1016 18:29:33.296214  254209 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:33.296275  254209 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:29:33.298961  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
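[Editor's note] lock.go:35 serializes the kubeconfig rewrite behind a named lock with Delay:500ms and Timeout:1m0s, so the parallel profiles in this run don't clobber each other's contexts. A sketch under those two parameters; the O_EXCL lock file is this sketch's mechanism, not necessarily minikube's:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // withLock serializes writers on a sidecar lock file, retrying every
    // delay until timeout. Exclusive-create (O_EXCL) is the lock primitive.
    func withLock(path string, delay, timeout time.Duration, fn func() error) error {
        lock := path + ".lock"
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                defer os.Remove(lock)
                return fn()
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out acquiring %s", lock)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        err := withLock("kubeconfig", 500*time.Millisecond, time.Minute, func() error {
            return os.WriteFile("kubeconfig", []byte("# updated contexts\n"), 0o600)
        })
        if err != nil {
            fmt.Println(err)
        }
    }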
	I1016 18:29:33.299346  254209 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:29:33.299369  254209 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 18:29:33.299475  254209 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:29:33.299572  254209 config.go:182] Loaded profile config "default-k8s-diff-port-523257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:29:33.299578  254209 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-523257"
	I1016 18:29:33.299595  254209 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-523257"
	I1016 18:29:33.299620  254209 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-523257"
	I1016 18:29:33.299628  254209 host.go:66] Checking if "default-k8s-diff-port-523257" exists ...
	I1016 18:29:33.299636  254209 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-523257"
	I1016 18:29:33.300012  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:29:33.300177  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:29:33.302171  254209 out.go:179] * Verifying Kubernetes components...
	I1016 18:29:33.304470  254209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:29:33.332040  254209 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-523257"
	I1016 18:29:33.332146  254209 host.go:66] Checking if "default-k8s-diff-port-523257" exists ...
	I1016 18:29:33.332598  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:29:33.336186  254209 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:29:33.337836  254209 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:29:33.337921  254209 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:29:33.338014  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:33.370804  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:33.371205  254209 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:29:33.371228  254209 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:29:33.371286  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:33.396649  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:33.405998  254209 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 18:29:33.480661  254209 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:29:33.493270  254209 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:29:33.508563  254209 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:29:33.588784  254209 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
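[Editor's note] start.go:976 lands the host record: the sed pipeline at 18:29:33.405 splices a hosts{} stanza (192.168.85.1 host.minikube.internal) ahead of the Corefile's forward plugin, adds a `log` directive ahead of `errors`, and kubectl-replaces the ConfigMap. A string-level sketch of just the hosts insertion; the real flow round-trips the ConfigMap YAML rather than a bare Corefile:

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a hosts{} stanza ahead of the forward
    // plugin line, approximating the sed `/^        forward .../i` edit.
    func injectHostRecord(corefile, gatewayIP string) string {
        hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", gatewayIP)
        var out strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                out.WriteString(hosts)
            }
            out.WriteString(line)
        }
        return out.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
        fmt.Print(injectHostRecord(corefile, "192.168.85.1"))
    }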
	I1016 18:29:33.590519  254209 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-523257" to be "Ready" ...
	I1016 18:29:33.809245  254209 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
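[Editor's note] Enabling an addon is the two steps visible above: scp the manifest into /etc/kubernetes/addons/ and apply it with KUBECONFIG passed through sudo's environment (sudo accepts VAR=value arguments before the command, exactly as logged). A sketch of the apply half, using the same binary path and env style:

    package main

    import (
        "os"
        "os/exec"
    )

    func applyAddon(manifest string) error {
        // Mirrors the log: KUBECONFIG in the environment rather than
        // the --kubeconfig flag.
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.34.1/kubectl", "apply", "-f", manifest)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        for _, m := range []string{
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            "/etc/kubernetes/addons/storageclass.yaml",
        } {
            if err := applyAddon(m); err != nil {
                os.Exit(1)
            }
        }
    }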
	I1016 18:29:29.070288  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:29.070318  228782 cri.go:89] found id: ""
	I1016 18:29:29.070328  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:29.070383  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:29.074419  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:29.074490  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:29.101845  228782 cri.go:89] found id: ""
	I1016 18:29:29.101875  228782 logs.go:282] 0 containers: []
	W1016 18:29:29.101886  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:29.101894  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:29.101945  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:29.130198  228782 cri.go:89] found id: ""
	I1016 18:29:29.130243  228782 logs.go:282] 0 containers: []
	W1016 18:29:29.130255  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:29.130267  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:29.130324  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:29.171097  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:29.171116  228782 cri.go:89] found id: ""
	I1016 18:29:29.171123  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:29.171166  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:29.175059  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:29.175114  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:29.204192  228782 cri.go:89] found id: ""
	I1016 18:29:29.204217  228782 logs.go:282] 0 containers: []
	W1016 18:29:29.204224  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:29.204229  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:29.204278  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:29.231647  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:29.231672  228782 cri.go:89] found id: ""
	I1016 18:29:29.231681  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:29.231757  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:29.236497  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:29.236557  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:29.266328  228782 cri.go:89] found id: ""
	I1016 18:29:29.266354  228782 logs.go:282] 0 containers: []
	W1016 18:29:29.266365  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:29.266372  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:29.266431  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:29.296904  228782 cri.go:89] found id: ""
	I1016 18:29:29.296926  228782 logs.go:282] 0 containers: []
	W1016 18:29:29.296936  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:29.296946  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:29.296957  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:29.389410  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:29.389443  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:29.404894  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:29.404925  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:29.463298  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:29.463323  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:29.463342  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:29.497484  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:29.497513  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:29.548374  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:29.548408  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:29.574914  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:29.574946  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:29.630476  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:29.630506  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:32.164804  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:32.165219  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:32.165273  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:32.165322  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:32.192921  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:32.192940  228782 cri.go:89] found id: ""
	I1016 18:29:32.192947  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:32.193009  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:32.197494  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:32.197566  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:32.226679  228782 cri.go:89] found id: ""
	I1016 18:29:32.226706  228782 logs.go:282] 0 containers: []
	W1016 18:29:32.226732  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:32.226740  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:32.226802  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:32.256127  228782 cri.go:89] found id: ""
	I1016 18:29:32.256152  228782 logs.go:282] 0 containers: []
	W1016 18:29:32.256162  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:32.256170  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:32.256231  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:32.286329  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:32.286351  228782 cri.go:89] found id: ""
	I1016 18:29:32.286361  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:32.286418  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:32.290615  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:32.290687  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:32.318965  228782 cri.go:89] found id: ""
	I1016 18:29:32.318989  228782 logs.go:282] 0 containers: []
	W1016 18:29:32.318999  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:32.319007  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:32.319086  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:32.349977  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:32.350001  228782 cri.go:89] found id: ""
	I1016 18:29:32.350011  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:32.350084  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:32.354512  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:32.354578  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:32.381776  228782 cri.go:89] found id: ""
	I1016 18:29:32.381805  228782 logs.go:282] 0 containers: []
	W1016 18:29:32.381814  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:32.381822  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:32.381884  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:32.413298  228782 cri.go:89] found id: ""
	I1016 18:29:32.413324  228782 logs.go:282] 0 containers: []
	W1016 18:29:32.413335  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:32.413347  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:32.413360  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:32.472097  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:32.472114  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:32.472127  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:32.505633  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:32.505661  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:32.555025  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:32.555072  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:32.585744  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:32.585777  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:32.644161  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:32.644194  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:32.676157  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:32.676182  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:32.772828  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:32.772860  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:33.810778  254209 addons.go:514] duration metric: took 511.307538ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1016 18:29:34.093650  254209 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-523257" context rescaled to 1 replicas
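[Editor's note] kapi.go:214 trims coredns from its default two replicas to one for this single-node profile. minikube performs the rescale programmatically; an equivalent, assumed-convenient one-liner via `kubectl scale`, wrapped in Go to keep one language across these sketches and run against the current context:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // One-replica coredns for a single-node profile.
        cmd := exec.Command("kubectl", "-n", "kube-system",
            "scale", "deployment", "coredns", "--replicas=1")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }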
	W1016 18:29:35.593703  254209 node_ready.go:57] node "default-k8s-diff-port-523257" has "Ready":"False" status (will retry)
	I1016 18:29:35.291809  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:35.292347  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:35.292397  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:35.292449  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:35.320203  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:35.320224  228782 cri.go:89] found id: ""
	I1016 18:29:35.320231  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:35.320276  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:35.324296  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:35.324356  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:35.351958  228782 cri.go:89] found id: ""
	I1016 18:29:35.351982  228782 logs.go:282] 0 containers: []
	W1016 18:29:35.351990  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:35.352012  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:35.352071  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:35.382337  228782 cri.go:89] found id: ""
	I1016 18:29:35.382364  228782 logs.go:282] 0 containers: []
	W1016 18:29:35.382375  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:35.382382  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:35.382436  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:35.409388  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:35.409406  228782 cri.go:89] found id: ""
	I1016 18:29:35.409413  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:35.409455  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:35.413485  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:35.413543  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:35.440778  228782 cri.go:89] found id: ""
	I1016 18:29:35.440804  228782 logs.go:282] 0 containers: []
	W1016 18:29:35.440812  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:35.440820  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:35.440896  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:35.466161  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:35.466184  228782 cri.go:89] found id: ""
	I1016 18:29:35.466193  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:35.466246  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:35.470498  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:35.470557  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:35.498773  228782 cri.go:89] found id: ""
	I1016 18:29:35.498794  228782 logs.go:282] 0 containers: []
	W1016 18:29:35.498800  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:35.498805  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:35.498850  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:35.525923  228782 cri.go:89] found id: ""
	I1016 18:29:35.525947  228782 logs.go:282] 0 containers: []
	W1016 18:29:35.525956  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:35.525982  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:35.526000  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:35.559484  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:35.559519  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:35.615011  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:35.615051  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:35.642652  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:35.642687  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:35.704004  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:35.704038  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:35.736269  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:35.736298  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:35.825956  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:35.825994  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:35.841899  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:35.841935  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:35.898506  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:38.400113  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:38.400540  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:38.400594  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:38.400649  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:38.427645  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:38.427665  228782 cri.go:89] found id: ""
	I1016 18:29:38.427674  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:38.427732  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:38.431841  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:38.431910  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:38.459141  228782 cri.go:89] found id: ""
	I1016 18:29:38.459165  228782 logs.go:282] 0 containers: []
	W1016 18:29:38.459175  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:38.459182  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:38.459238  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:38.486994  228782 cri.go:89] found id: ""
	I1016 18:29:38.487021  228782 logs.go:282] 0 containers: []
	W1016 18:29:38.487032  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:38.487039  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:38.487100  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:38.514487  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:38.514508  228782 cri.go:89] found id: ""
	I1016 18:29:38.514515  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:38.514564  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:38.518661  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:38.518736  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:38.546066  228782 cri.go:89] found id: ""
	I1016 18:29:38.546087  228782 logs.go:282] 0 containers: []
	W1016 18:29:38.546095  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:38.546100  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:38.546154  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:38.574022  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:38.574039  228782 cri.go:89] found id: ""
	I1016 18:29:38.574045  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:38.574087  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:38.578237  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:38.578307  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:38.607676  228782 cri.go:89] found id: ""
	I1016 18:29:38.607699  228782 logs.go:282] 0 containers: []
	W1016 18:29:38.607706  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:38.607736  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:38.607796  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:38.635578  228782 cri.go:89] found id: ""
	I1016 18:29:38.635604  228782 logs.go:282] 0 containers: []
	W1016 18:29:38.635615  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:38.635625  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:38.635640  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:38.694675  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:38.694699  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:38.694738  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:38.728850  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:38.728879  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:38.780750  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:38.780780  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:38.809679  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:38.809705  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:38.863006  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:38.863035  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:38.894630  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:38.894657  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:38.990653  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:38.990687  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	
	
	==> CRI-O <==
	Oct 16 18:28:58 no-preload-808539 crio[568]: time="2025-10-16T18:28:58.778881576Z" level=info msg="Started container" PID=1732 containerID=a8ff625b8a2049faf833b18bdf18fbcd11c8d071abb2f0cef12f35ddbb8f896e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f/dashboard-metrics-scraper id=3a458974-9446-4c05-86b7-2995171829b5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=be5ee5b2aa8b77ec951b387864d228a26a00ac68df1f4e30fb7783dc23e86aac
	Oct 16 18:28:59 no-preload-808539 crio[568]: time="2025-10-16T18:28:59.72863572Z" level=info msg="Removing container: dc039ad879b28002d2a75b23e31ba73171d04a6d336d24f256364e198f6302b6" id=972e6b6d-ad53-457e-bb0d-76cf2824fa46 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:28:59 no-preload-808539 crio[568]: time="2025-10-16T18:28:59.738853283Z" level=info msg="Removed container dc039ad879b28002d2a75b23e31ba73171d04a6d336d24f256364e198f6302b6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f/dashboard-metrics-scraper" id=972e6b6d-ad53-457e-bb0d-76cf2824fa46 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.655079539Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=578afea2-e0a0-45c2-a1ed-07083230f2cc name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.65607512Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1974088d-44b4-4513-aa31-6776c3a704b9 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.657255529Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f/dashboard-metrics-scraper" id=4f746d7b-6374-46c3-8bb4-1ebb853b4ccc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.657536256Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.663424106Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.663958934Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.70202868Z" level=info msg="Created container 08876948c4f7dfb4079f76cc0a99927216b6d250c7e21b297512890297bcaa9d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f/dashboard-metrics-scraper" id=4f746d7b-6374-46c3-8bb4-1ebb853b4ccc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.702749775Z" level=info msg="Starting container: 08876948c4f7dfb4079f76cc0a99927216b6d250c7e21b297512890297bcaa9d" id=34430631-24db-482c-92b6-2f80fb2a0d7b name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.705061062Z" level=info msg="Started container" PID=1742 containerID=08876948c4f7dfb4079f76cc0a99927216b6d250c7e21b297512890297bcaa9d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f/dashboard-metrics-scraper id=34430631-24db-482c-92b6-2f80fb2a0d7b name=/runtime.v1.RuntimeService/StartContainer sandboxID=be5ee5b2aa8b77ec951b387864d228a26a00ac68df1f4e30fb7783dc23e86aac
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.782226794Z" level=info msg="Removing container: a8ff625b8a2049faf833b18bdf18fbcd11c8d071abb2f0cef12f35ddbb8f896e" id=8dfb8ac2-b9df-4c8f-8601-517a15db2fc6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.792843833Z" level=info msg="Removed container a8ff625b8a2049faf833b18bdf18fbcd11c8d071abb2f0cef12f35ddbb8f896e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f/dashboard-metrics-scraper" id=8dfb8ac2-b9df-4c8f-8601-517a15db2fc6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.78612746Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cefa2179-6537-4db6-b33c-1d852d2ed518 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.787364524Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a296e5e2-ca88-450b-b241-573d039f3eac name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.788350833Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=30e21518-c032-4a57-9580-baf87b6c84cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.788626755Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.794648315Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.794881091Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/731c49cee117e97f19172c2bb3c09e6d98e754a58a737ffd8257bb7e87531534/merged/etc/passwd: no such file or directory"
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.794916274Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/731c49cee117e97f19172c2bb3c09e6d98e754a58a737ffd8257bb7e87531534/merged/etc/group: no such file or directory"
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.795233135Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.833972796Z" level=info msg="Created container ebf3196883c18d487165f285301c9acb4041875447091801dea9902d984ed8e9: kube-system/storage-provisioner/storage-provisioner" id=30e21518-c032-4a57-9580-baf87b6c84cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.834661728Z" level=info msg="Starting container: ebf3196883c18d487165f285301c9acb4041875447091801dea9902d984ed8e9" id=cc304b0d-aa66-4de1-a3bc-4ca27f2c6683 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.837669216Z" level=info msg="Started container" PID=1756 containerID=ebf3196883c18d487165f285301c9acb4041875447091801dea9902d984ed8e9 description=kube-system/storage-provisioner/storage-provisioner id=cc304b0d-aa66-4de1-a3bc-4ca27f2c6683 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e8db1dcd37a5e0bbf47c4d06d4bcb590260578eb9807867d33379d876807507
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	ebf3196883c18       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   3e8db1dcd37a5       storage-provisioner                          kube-system
	08876948c4f7d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   be5ee5b2aa8b7       dashboard-metrics-scraper-6ffb444bf9-xpk9f   kubernetes-dashboard
	91a77615ada58       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   a19a833e89ed2       kubernetes-dashboard-855c9754f9-j8f8d        kubernetes-dashboard
	3de7cf0205d7d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   b7041baf44561       coredns-66bc5c9577-ntqqg                     kube-system
	151b9fc5c5caa       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   a12056f23d26b       busybox                                      default
	a093902546acd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   3e8db1dcd37a5       storage-provisioner                          kube-system
	9af550e59feff       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   2940a42013797       kindnet-kxznd                                kube-system
	c0468f3a79d7d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   c7f7d7001858c       kube-proxy-68kl9                             kube-system
	916c3b6d66243       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   15a9927fee995       kube-scheduler-no-preload-808539             kube-system
	4f293fe8269d1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   1abb064121f2a       kube-apiserver-no-preload-808539             kube-system
	7181b04bfb82e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   4289d00c455b9       kube-controller-manager-no-preload-808539    kube-system
	36d3ec65570d3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   75e9fe117e64f       etcd-no-preload-808539                       kube-system
	
	
	==> coredns [3de7cf0205d7d6eeac5cc2e822d62c8b8946ba8f92cbf91e763dd4318fd7e3c7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59304 - 21226 "HINFO IN 744875417056776112.4312268298637680400. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.034905363s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-808539
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-808539
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=no-preload-808539
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_27_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:27:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-808539
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:29:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:29:18 +0000   Thu, 16 Oct 2025 18:27:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:29:18 +0000   Thu, 16 Oct 2025 18:27:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:29:18 +0000   Thu, 16 Oct 2025 18:27:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 18:29:18 +0000   Thu, 16 Oct 2025 18:28:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-808539
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                738a1706-7fde-4f71-a519-e3178e828487
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-ntqqg                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-no-preload-808539                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-kxznd                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-808539              250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-no-preload-808539     200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-68kl9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-808539              100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-xpk9f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-j8f8d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node no-preload-808539 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node no-preload-808539 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node no-preload-808539 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node no-preload-808539 event: Registered Node no-preload-808539 in Controller
	  Normal  NodeReady                95s                kubelet          Node no-preload-808539 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node no-preload-808539 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node no-preload-808539 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node no-preload-808539 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node no-preload-808539 event: Registered Node no-preload-808539 in Controller
	
	
	==> dmesg <==
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.008703] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.022984] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023933] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +2.047848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +4.031714] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +8.063410] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[ +16.382910] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[Oct16 17:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	
	
	==> etcd [36d3ec65570d3105d713c2d5a8f592c5757f5b797e08265d5e50fa232714f4ec] <==
	{"level":"warn","ts":"2025-10-16T18:28:46.657915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.673927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.680314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.686568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.692984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.700262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.706569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.715261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.723652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.736962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.743691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.751379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.758518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.766923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.776230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.791971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.803105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.810590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.818104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.824621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.838968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.842791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.850478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.857981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.911295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47276","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:29:42 up  1:12,  0 user,  load average: 3.64, 2.71, 1.75
	Linux no-preload-808539 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9af550e59feffaa80d88161ffa36ffd9b00a7f1c63f27efce7435d4fb3f0f71a] <==
	I1016 18:28:48.260656       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 18:28:48.261092       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1016 18:28:48.261266       1 main.go:148] setting mtu 1500 for CNI 
	I1016 18:28:48.261286       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 18:28:48.261309       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T18:28:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 18:28:48.478382       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 18:28:48.478412       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 18:28:48.478433       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 18:28:48.479391       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 18:28:48.779230       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 18:28:48.779280       1 metrics.go:72] Registering metrics
	I1016 18:28:48.779387       1 controller.go:711] "Syncing nftables rules"
	I1016 18:28:58.477801       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1016 18:28:58.477870       1 main.go:301] handling current node
	I1016 18:29:08.478774       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1016 18:29:08.478806       1 main.go:301] handling current node
	I1016 18:29:18.478063       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1016 18:29:18.478126       1 main.go:301] handling current node
	I1016 18:29:28.478159       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1016 18:29:28.478196       1 main.go:301] handling current node
	I1016 18:29:38.486838       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1016 18:29:38.486876       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4f293fe8269d1d295e9d15b52d72bb19e3d1f3c9099a4102dec127e207a05b13] <==
	I1016 18:28:47.392867       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1016 18:28:47.392920       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1016 18:28:47.392966       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 18:28:47.393207       1 aggregator.go:171] initial CRD sync complete...
	I1016 18:28:47.393215       1 autoregister_controller.go:144] Starting autoregister controller
	I1016 18:28:47.393220       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 18:28:47.393225       1 cache.go:39] Caches are synced for autoregister controller
	I1016 18:28:47.398037       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1016 18:28:47.399101       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1016 18:28:47.407115       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1016 18:28:47.417490       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1016 18:28:47.417523       1 policy_source.go:240] refreshing policies
	I1016 18:28:47.428503       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:28:47.630646       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 18:28:47.647474       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 18:28:47.670510       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 18:28:47.699191       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 18:28:47.706651       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 18:28:47.750824       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.153.115"}
	I1016 18:28:47.764443       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.5.198"}
	I1016 18:28:48.295818       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 18:28:50.794952       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 18:28:50.842880       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 18:28:51.291138       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 18:28:51.291147       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [7181b04bfb82e037325297ecffa17ead24bea639b33b265693a70609af2e891c] <==
	I1016 18:28:50.738050       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1016 18:28:50.738158       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1016 18:28:50.738258       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1016 18:28:50.738305       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1016 18:28:50.738316       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1016 18:28:50.738325       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1016 18:28:50.738309       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 18:28:50.738362       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-808539"
	I1016 18:28:50.738420       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1016 18:28:50.739086       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1016 18:28:50.741633       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 18:28:50.742422       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:28:50.743997       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 18:28:50.745243       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1016 18:28:50.745288       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1016 18:28:50.745321       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1016 18:28:50.745249       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:28:50.745328       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1016 18:28:50.745385       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1016 18:28:50.745483       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 18:28:50.748650       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1016 18:28:50.750323       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 18:28:50.755309       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1016 18:28:50.756568       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:28:50.759697       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	
	
	==> kube-proxy [c0468f3a79d7d838f56df1eb32a946b34b2c3ab791c04e2980dbd98bdf6559e9] <==
	I1016 18:28:48.065410       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:28:48.130752       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:28:48.231551       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:28:48.231590       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1016 18:28:48.231705       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:28:48.251543       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:28:48.251609       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:28:48.257243       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:28:48.257776       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:28:48.257813       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:28:48.261423       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:28:48.261436       1 config.go:200] "Starting service config controller"
	I1016 18:28:48.261446       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:28:48.261449       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:28:48.261470       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:28:48.261488       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:28:48.261535       1 config.go:309] "Starting node config controller"
	I1016 18:28:48.261544       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:28:48.361605       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 18:28:48.361607       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:28:48.361653       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1016 18:28:48.361738       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [916c3b6d662439d89a451d927be5cafe6a0fca42419d42bd59af6042bb15ceea] <==
	I1016 18:28:47.307972       1 serving.go:386] Generated self-signed cert in-memory
	I1016 18:28:48.119263       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 18:28:48.119288       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:28:48.124053       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1016 18:28:48.124284       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1016 18:28:48.124393       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 18:28:48.124446       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 18:28:48.124448       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:28:48.124931       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:28:48.125890       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 18:28:48.125929       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 18:28:48.224639       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1016 18:28:48.224639       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 18:28:48.225733       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 18:28:51 no-preload-808539 kubelet[711]: I1016 18:28:51.463553     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/98611c84-133a-4ab8-992f-3f5889238b0e-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-j8f8d\" (UID: \"98611c84-133a-4ab8-992f-3f5889238b0e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j8f8d"
	Oct 16 18:28:51 no-preload-808539 kubelet[711]: I1016 18:28:51.463626     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8cbh\" (UniqueName: \"kubernetes.io/projected/1ecf1060-060b-41cd-a215-9ddf9b9e68d5-kube-api-access-z8cbh\") pod \"dashboard-metrics-scraper-6ffb444bf9-xpk9f\" (UID: \"1ecf1060-060b-41cd-a215-9ddf9b9e68d5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f"
	Oct 16 18:28:51 no-preload-808539 kubelet[711]: I1016 18:28:51.463653     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjxmb\" (UniqueName: \"kubernetes.io/projected/98611c84-133a-4ab8-992f-3f5889238b0e-kube-api-access-cjxmb\") pod \"kubernetes-dashboard-855c9754f9-j8f8d\" (UID: \"98611c84-133a-4ab8-992f-3f5889238b0e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j8f8d"
	Oct 16 18:28:51 no-preload-808539 kubelet[711]: I1016 18:28:51.463743     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1ecf1060-060b-41cd-a215-9ddf9b9e68d5-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-xpk9f\" (UID: \"1ecf1060-060b-41cd-a215-9ddf9b9e68d5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f"
	Oct 16 18:28:55 no-preload-808539 kubelet[711]: I1016 18:28:55.915945     711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 16 18:28:56 no-preload-808539 kubelet[711]: I1016 18:28:56.751550     711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j8f8d" podStartSLOduration=1.6293062740000002 podStartE2EDuration="5.751527639s" podCreationTimestamp="2025-10-16 18:28:51 +0000 UTC" firstStartedPulling="2025-10-16 18:28:51.702018847 +0000 UTC m=+7.152122961" lastFinishedPulling="2025-10-16 18:28:55.824240215 +0000 UTC m=+11.274344326" observedRunningTime="2025-10-16 18:28:56.750884026 +0000 UTC m=+12.200988154" watchObservedRunningTime="2025-10-16 18:28:56.751527639 +0000 UTC m=+12.201631770"
	Oct 16 18:28:58 no-preload-808539 kubelet[711]: I1016 18:28:58.720091     711 scope.go:117] "RemoveContainer" containerID="dc039ad879b28002d2a75b23e31ba73171d04a6d336d24f256364e198f6302b6"
	Oct 16 18:28:59 no-preload-808539 kubelet[711]: I1016 18:28:59.727129     711 scope.go:117] "RemoveContainer" containerID="dc039ad879b28002d2a75b23e31ba73171d04a6d336d24f256364e198f6302b6"
	Oct 16 18:28:59 no-preload-808539 kubelet[711]: I1016 18:28:59.727295     711 scope.go:117] "RemoveContainer" containerID="a8ff625b8a2049faf833b18bdf18fbcd11c8d071abb2f0cef12f35ddbb8f896e"
	Oct 16 18:28:59 no-preload-808539 kubelet[711]: E1016 18:28:59.727478     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xpk9f_kubernetes-dashboard(1ecf1060-060b-41cd-a215-9ddf9b9e68d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f" podUID="1ecf1060-060b-41cd-a215-9ddf9b9e68d5"
	Oct 16 18:29:00 no-preload-808539 kubelet[711]: I1016 18:29:00.732811     711 scope.go:117] "RemoveContainer" containerID="a8ff625b8a2049faf833b18bdf18fbcd11c8d071abb2f0cef12f35ddbb8f896e"
	Oct 16 18:29:00 no-preload-808539 kubelet[711]: E1016 18:29:00.732995     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xpk9f_kubernetes-dashboard(1ecf1060-060b-41cd-a215-9ddf9b9e68d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f" podUID="1ecf1060-060b-41cd-a215-9ddf9b9e68d5"
	Oct 16 18:29:06 no-preload-808539 kubelet[711]: I1016 18:29:06.881984     711 scope.go:117] "RemoveContainer" containerID="a8ff625b8a2049faf833b18bdf18fbcd11c8d071abb2f0cef12f35ddbb8f896e"
	Oct 16 18:29:06 no-preload-808539 kubelet[711]: E1016 18:29:06.882229     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xpk9f_kubernetes-dashboard(1ecf1060-060b-41cd-a215-9ddf9b9e68d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f" podUID="1ecf1060-060b-41cd-a215-9ddf9b9e68d5"
	Oct 16 18:29:17 no-preload-808539 kubelet[711]: I1016 18:29:17.654509     711 scope.go:117] "RemoveContainer" containerID="a8ff625b8a2049faf833b18bdf18fbcd11c8d071abb2f0cef12f35ddbb8f896e"
	Oct 16 18:29:17 no-preload-808539 kubelet[711]: I1016 18:29:17.780870     711 scope.go:117] "RemoveContainer" containerID="a8ff625b8a2049faf833b18bdf18fbcd11c8d071abb2f0cef12f35ddbb8f896e"
	Oct 16 18:29:17 no-preload-808539 kubelet[711]: I1016 18:29:17.781104     711 scope.go:117] "RemoveContainer" containerID="08876948c4f7dfb4079f76cc0a99927216b6d250c7e21b297512890297bcaa9d"
	Oct 16 18:29:17 no-preload-808539 kubelet[711]: E1016 18:29:17.781325     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xpk9f_kubernetes-dashboard(1ecf1060-060b-41cd-a215-9ddf9b9e68d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f" podUID="1ecf1060-060b-41cd-a215-9ddf9b9e68d5"
	Oct 16 18:29:18 no-preload-808539 kubelet[711]: I1016 18:29:18.785516     711 scope.go:117] "RemoveContainer" containerID="a093902546acd6ce48370566d454810105657ad4e3a0b5c22c8d50931991d0f2"
	Oct 16 18:29:26 no-preload-808539 kubelet[711]: I1016 18:29:26.881876     711 scope.go:117] "RemoveContainer" containerID="08876948c4f7dfb4079f76cc0a99927216b6d250c7e21b297512890297bcaa9d"
	Oct 16 18:29:26 no-preload-808539 kubelet[711]: E1016 18:29:26.882067     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xpk9f_kubernetes-dashboard(1ecf1060-060b-41cd-a215-9ddf9b9e68d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f" podUID="1ecf1060-060b-41cd-a215-9ddf9b9e68d5"
	Oct 16 18:29:39 no-preload-808539 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 18:29:39 no-preload-808539 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 18:29:39 no-preload-808539 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 16 18:29:39 no-preload-808539 systemd[1]: kubelet.service: Consumed 1.804s CPU time.
	
	
	==> kubernetes-dashboard [91a77615ada5800866478c73b61ad9458c9aab68602263b4fbb76cbe49d2c275] <==
	2025/10/16 18:28:55 Using namespace: kubernetes-dashboard
	2025/10/16 18:28:55 Using in-cluster config to connect to apiserver
	2025/10/16 18:28:55 Using secret token for csrf signing
	2025/10/16 18:28:55 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/16 18:28:55 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/16 18:28:55 Successful initial request to the apiserver, version: v1.34.1
	2025/10/16 18:28:55 Generating JWE encryption key
	2025/10/16 18:28:55 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/16 18:28:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/16 18:28:56 Initializing JWE encryption key from synchronized object
	2025/10/16 18:28:56 Creating in-cluster Sidecar client
	2025/10/16 18:28:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 18:28:56 Serving insecurely on HTTP port: 9090
	2025/10/16 18:29:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 18:28:55 Starting overwatch
	
	
	==> storage-provisioner [a093902546acd6ce48370566d454810105657ad4e3a0b5c22c8d50931991d0f2] <==
	I1016 18:28:48.035294       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1016 18:29:18.038159       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ebf3196883c18d487165f285301c9acb4041875447091801dea9902d984ed8e9] <==
	I1016 18:29:18.849677       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 18:29:18.857821       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 18:29:18.857867       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1016 18:29:18.860396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:22.315463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:26.576158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:30.174874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:33.229215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:36.251450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:36.255883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 18:29:36.256022       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 18:29:36.256196       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"403a8f30-1976-4add-8440-a3609b846a31", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-808539_3190ad05-4de0-4506-866e-f0ae8f8714c4 became leader
	I1016 18:29:36.256222       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-808539_3190ad05-4de0-4506-866e-f0ae8f8714c4!
	W1016 18:29:36.258335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:36.262011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 18:29:36.356443       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-808539_3190ad05-4de0-4506-866e-f0ae8f8714c4!
	W1016 18:29:38.265018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:38.268836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:40.272189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:40.276793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:42.280775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:42.287969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-808539 -n no-preload-808539
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-808539 -n no-preload-808539: exit status 2 (349.981912ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-808539 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-808539
helpers_test.go:243: (dbg) docker inspect no-preload-808539:

-- stdout --
	[
	    {
	        "Id": "ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674",
	        "Created": "2025-10-16T18:27:19.34518913Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 245577,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:28:38.24161274Z",
	            "FinishedAt": "2025-10-16T18:28:37.405919085Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674/hosts",
	        "LogPath": "/var/lib/docker/containers/ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674/ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674-json.log",
	        "Name": "/no-preload-808539",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-808539:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-808539",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ee665d228e598751e1638eb142a7879e4d9bbbf80b0201c8745cbc5c4da9a674",
	                "LowerDir": "/var/lib/docker/overlay2/868fea85c82dc716ed77eebcc797a288434c0c337e413bace60fdc41e29b2321-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/868fea85c82dc716ed77eebcc797a288434c0c337e413bace60fdc41e29b2321/merged",
	                "UpperDir": "/var/lib/docker/overlay2/868fea85c82dc716ed77eebcc797a288434c0c337e413bace60fdc41e29b2321/diff",
	                "WorkDir": "/var/lib/docker/overlay2/868fea85c82dc716ed77eebcc797a288434c0c337e413bace60fdc41e29b2321/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-808539",
	                "Source": "/var/lib/docker/volumes/no-preload-808539/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-808539",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-808539",
	                "name.minikube.sigs.k8s.io": "no-preload-808539",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "01cfbd7e3b6a4580dfa96c52130c4aa91cb0a438413e236ed53b2f26370660e1",
	            "SandboxKey": "/var/run/docker/netns/01cfbd7e3b6a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-808539": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:90:06:f8:1f:25",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "38dc5e7162482fea5b37cb1ee9d81ad023804ad94f7487798d7ddee0954e300e",
	                    "EndpointID": "11c5e5bc704a28b128dd8cb214ab5a4c51aedf7f59c06213c99194eadbf8d464",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-808539",
	                        "ee665d228e59"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
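The docker inspect dump above is the post-mortem snapshot of the no-preload-808539 kic container; for triaging the pause failure, the parts that usually matter are NetworkSettings.Ports (the ports published on 127.0.0.1, including the API server on 8443) and the Mounts list. Below is a minimal Go sketch for extracting those host port mappings from a saved dump; the file name inspect.json is an assumption for illustration, not something the harness writes.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// portBinding matches the HostIp/HostPort objects under NetworkSettings.Ports.
type portBinding struct {
	HostIp   string
	HostPort string
}

type container struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	// `docker inspect` emits a JSON array even for a single container.
	data, err := os.ReadFile("inspect.json")
	if err != nil {
		panic(err)
	}
	var cs []container
	if err := json.Unmarshal(data, &cs); err != nil {
		panic(err)
	}
	for _, c := range cs {
		for port, bindings := range c.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}
}

Run against the dump above, this prints, among others, 8443/tcp -> 127.0.0.1:33071, the forwarded API-server port the paused cluster was reachable on.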
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-808539 -n no-preload-808539
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-808539 -n no-preload-808539: exit status 2 (339.570152ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
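The harness tolerates exit status 2 from minikube status here ("may be ok"): the host container still reports Running, and the non-zero code presumably reflects the just-paused cluster components rather than a broken machine. A hedged Go sketch of the same tolerance, with the profile name hard-coded for illustration:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-808539", "-n", "no-preload-808539")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		if ee.ExitCode() != 2 { // only exit status 2 is treated as "may be ok"
			log.Fatalf("status failed: exit code %d", ee.ExitCode())
		}
	} else if err != nil {
		log.Fatalf("could not run minikube status: %v", err)
	}
	fmt.Println(strings.TrimSpace(string(out))) // "Running" in the run above
}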
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-808539 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-808539 logs -n 25: (1.127352691s)
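Every post-mortem step in this report follows the same (dbg) Run / (dbg) Done pattern: execute the command, record the wall-clock duration (1.127s for the logs call above), then attach its output. A self-contained sketch of that pattern with an assumed two-minute timeout (the actual helpers_test plumbing is not part of this report):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	start := time.Now()
	// Same invocation as the Run line above: the last 25 log lines for the profile.
	out, err := exec.CommandContext(ctx, "out/minikube-linux-amd64",
		"-p", "no-preload-808539", "logs", "-n", "25").CombinedOutput()
	fmt.Printf("(dbg) Done in %s (err=%v)\n%s", time.Since(start).Round(time.Millisecond), err, out)
}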
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p kubernetes-upgrade-750025                                                                                                                                                                                                                  │ kubernetes-upgrade-750025    │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-750025    │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │                     │
	│ delete  │ -p missing-upgrade-294813                                                                                                                                                                                                                     │ missing-upgrade-294813       │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:27 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-956814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │                     │
	│ start   │ -p no-preload-808539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:28 UTC │
	│ stop    │ -p old-k8s-version-956814 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:27 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-956814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:27 UTC │
	│ start   │ -p old-k8s-version-956814 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:28 UTC │
	│ addons  │ enable metrics-server -p no-preload-808539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │                     │
	│ stop    │ -p no-preload-808539 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ addons  │ enable dashboard -p no-preload-808539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ start   │ -p no-preload-808539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:29 UTC │
	│ image   │ old-k8s-version-956814 image list --format=json                                                                                                                                                                                               │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ pause   │ -p old-k8s-version-956814 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │                     │
	│ delete  │ -p old-k8s-version-956814                                                                                                                                                                                                                     │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ delete  │ -p old-k8s-version-956814                                                                                                                                                                                                                     │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ start   │ -p embed-certs-063117 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p cert-expiration-489554 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-489554       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p cert-expiration-489554                                                                                                                                                                                                                     │ cert-expiration-489554       │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p disable-driver-mounts-246527                                                                                                                                                                                                               │ disable-driver-mounts-246527 │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p default-k8s-diff-port-523257 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ image   │ no-preload-808539 image list --format=json                                                                                                                                                                                                    │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ pause   │ -p no-preload-808539 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-063117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ stop    │ -p embed-certs-063117 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:29:07
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:29:07.040256  254209 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:29:07.040551  254209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:29:07.040562  254209 out.go:374] Setting ErrFile to fd 2...
	I1016 18:29:07.040565  254209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:29:07.040803  254209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:29:07.041325  254209 out.go:368] Setting JSON to false
	I1016 18:29:07.042806  254209 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4295,"bootTime":1760635052,"procs":320,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:29:07.042932  254209 start.go:141] virtualization: kvm guest
	I1016 18:29:07.045364  254209 out.go:179] * [default-k8s-diff-port-523257] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:29:07.046957  254209 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:29:07.046958  254209 notify.go:220] Checking for updates...
	I1016 18:29:07.050966  254209 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:29:07.052908  254209 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:29:07.054502  254209 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 18:29:07.055956  254209 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:29:07.057344  254209 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:29:07.059464  254209 config.go:182] Loaded profile config "embed-certs-063117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:29:07.059605  254209 config.go:182] Loaded profile config "kubernetes-upgrade-750025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:29:07.059765  254209 config.go:182] Loaded profile config "no-preload-808539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:29:07.059863  254209 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:29:07.085980  254209 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 18:29:07.086152  254209 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:29:07.152740  254209 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-16 18:29:07.141947952 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:29:07.152862  254209 docker.go:318] overlay module found
	I1016 18:29:07.154961  254209 out.go:179] * Using the docker driver based on user configuration
	I1016 18:29:07.156386  254209 start.go:305] selected driver: docker
	I1016 18:29:07.156405  254209 start.go:925] validating driver "docker" against <nil>
	I1016 18:29:07.156417  254209 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:29:07.157063  254209 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:29:07.222394  254209 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-16 18:29:07.211344644 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:29:07.222535  254209 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 18:29:07.222748  254209 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:29:07.224789  254209 out.go:179] * Using Docker driver with root privileges
	I1016 18:29:07.226432  254209 cni.go:84] Creating CNI manager for ""
	I1016 18:29:07.226503  254209 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:29:07.226522  254209 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1016 18:29:07.226597  254209 start.go:349] cluster config:
	{Name:default-k8s-diff-port-523257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:29:07.228189  254209 out.go:179] * Starting "default-k8s-diff-port-523257" primary control-plane node in "default-k8s-diff-port-523257" cluster
	I1016 18:29:07.229711  254209 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:29:07.231414  254209 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:29:07.232838  254209 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:29:07.232890  254209 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1016 18:29:07.232901  254209 cache.go:58] Caching tarball of preloaded images
	I1016 18:29:07.232950  254209 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:29:07.233007  254209 preload.go:233] Found /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1016 18:29:07.233023  254209 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:29:07.233110  254209 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/config.json ...
	I1016 18:29:07.233129  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/config.json: {Name:mkc8f0a47ba498cd8655372776f58860c7a1a49d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:07.255362  254209 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:29:07.255388  254209 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:29:07.255409  254209 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:29:07.255451  254209 start.go:360] acquireMachinesLock for default-k8s-diff-port-523257: {Name:mk0ef672dc84306ea126d15d9b249684df6a69ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:29:07.255579  254209 start.go:364] duration metric: took 109.249µs to acquireMachinesLock for "default-k8s-diff-port-523257"
	I1016 18:29:07.255609  254209 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-523257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:29:07.255702  254209 start.go:125] createHost starting for "" (driver="docker")
	W1016 18:29:05.418755  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	W1016 18:29:07.419105  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	I1016 18:29:04.081460  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:04.081500  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:06.598777  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:06.599234  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:06.599283  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:06.599337  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:06.632534  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:06.632559  228782 cri.go:89] found id: ""
	I1016 18:29:06.632566  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:06.632623  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:06.636735  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:06.636800  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:06.670881  228782 cri.go:89] found id: ""
	I1016 18:29:06.670915  228782 logs.go:282] 0 containers: []
	W1016 18:29:06.670928  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:06.670937  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:06.670990  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:06.701324  228782 cri.go:89] found id: ""
	I1016 18:29:06.701352  228782 logs.go:282] 0 containers: []
	W1016 18:29:06.701362  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:06.701370  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:06.701431  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:06.735895  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:06.735922  228782 cri.go:89] found id: ""
	I1016 18:29:06.735930  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:06.735980  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:06.741105  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:06.741178  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:06.774597  228782 cri.go:89] found id: ""
	I1016 18:29:06.774618  228782 logs.go:282] 0 containers: []
	W1016 18:29:06.774625  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:06.774632  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:06.774674  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:06.806134  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:06.806153  228782 cri.go:89] found id: ""
	I1016 18:29:06.806163  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:06.806215  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:06.811555  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:06.811627  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:06.846430  228782 cri.go:89] found id: ""
	I1016 18:29:06.846456  228782 logs.go:282] 0 containers: []
	W1016 18:29:06.846465  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:06.846472  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:06.846528  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:06.878395  228782 cri.go:89] found id: ""
	I1016 18:29:06.878419  228782 logs.go:282] 0 containers: []
	W1016 18:29:06.878430  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:06.878440  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:06.878454  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:06.938432  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:06.938467  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:06.970056  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:06.970085  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:07.027971  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:07.028000  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:07.064564  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:07.064596  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:07.164562  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:07.164594  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:07.185438  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:07.185470  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:07.260040  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:07.260063  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:07.260077  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:07.258815  254209 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1016 18:29:07.259101  254209 start.go:159] libmachine.API.Create for "default-k8s-diff-port-523257" (driver="docker")
	I1016 18:29:07.259145  254209 client.go:168] LocalClient.Create starting
	I1016 18:29:07.259324  254209 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem
	I1016 18:29:07.259400  254209 main.go:141] libmachine: Decoding PEM data...
	I1016 18:29:07.259427  254209 main.go:141] libmachine: Parsing certificate...
	I1016 18:29:07.259512  254209 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem
	I1016 18:29:07.259555  254209 main.go:141] libmachine: Decoding PEM data...
	I1016 18:29:07.259573  254209 main.go:141] libmachine: Parsing certificate...
	I1016 18:29:07.260104  254209 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-523257 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1016 18:29:07.281148  254209 cli_runner.go:211] docker network inspect default-k8s-diff-port-523257 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1016 18:29:07.281225  254209 network_create.go:284] running [docker network inspect default-k8s-diff-port-523257] to gather additional debugging logs...
	I1016 18:29:07.281243  254209 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-523257
	W1016 18:29:07.301649  254209 cli_runner.go:211] docker network inspect default-k8s-diff-port-523257 returned with exit code 1
	I1016 18:29:07.301683  254209 network_create.go:287] error running [docker network inspect default-k8s-diff-port-523257]: docker network inspect default-k8s-diff-port-523257: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-523257 not found
	I1016 18:29:07.301701  254209 network_create.go:289] output of [docker network inspect default-k8s-diff-port-523257]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-523257 not found
	
	** /stderr **
	I1016 18:29:07.301822  254209 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:29:07.322829  254209 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e6b487beca69 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:46:43:25:0f:93} reservation:<nil>}
	I1016 18:29:07.323663  254209 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9d79ecee39e1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:a0:12:f5:af:3a} reservation:<nil>}
	I1016 18:29:07.324428  254209 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-23b5ade12eda IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:13:e4:8d:c1:04} reservation:<nil>}
	I1016 18:29:07.324921  254209 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a07ac2eb0982 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:42:2a:d5:21:5c:9c} reservation:<nil>}
	I1016 18:29:07.325701  254209 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea8b80}
	I1016 18:29:07.325766  254209 network_create.go:124] attempt to create docker network default-k8s-diff-port-523257 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1016 18:29:07.325819  254209 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-523257 default-k8s-diff-port-523257
	I1016 18:29:07.389443  254209 network_create.go:108] docker network default-k8s-diff-port-523257 192.168.85.0/24 created
	I1016 18:29:07.389474  254209 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-523257" container
	I1016 18:29:07.389534  254209 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1016 18:29:07.408685  254209 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-523257 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-523257 --label created_by.minikube.sigs.k8s.io=true
	I1016 18:29:07.429641  254209 oci.go:103] Successfully created a docker volume default-k8s-diff-port-523257
	I1016 18:29:07.429766  254209 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-523257-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-523257 --entrypoint /usr/bin/test -v default-k8s-diff-port-523257:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1016 18:29:07.867408  254209 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-523257
	I1016 18:29:07.867462  254209 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:29:07.867483  254209 kic.go:194] Starting extracting preloaded images to volume ...
	I1016 18:29:07.867554  254209 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-523257:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1016 18:29:11.718052  254209 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-523257:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (3.850427538s)
	I1016 18:29:11.718089  254209 kic.go:203] duration metric: took 3.850601984s to extract preloaded images to volume ...
	W1016 18:29:11.718202  254209 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1016 18:29:11.718242  254209 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1016 18:29:11.718287  254209 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1016 18:29:11.783561  254209 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-523257 --name default-k8s-diff-port-523257 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-523257 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-523257 --network default-k8s-diff-port-523257 --ip 192.168.85.2 --volume default-k8s-diff-port-523257:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	W1016 18:29:09.920187  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	W1016 18:29:11.920840  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	I1016 18:29:09.798326  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:09.798815  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:09.798876  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:09.798935  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:09.834829  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:09.834862  228782 cri.go:89] found id: ""
	I1016 18:29:09.834871  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:09.834929  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:09.840366  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:09.840444  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:09.872774  228782 cri.go:89] found id: ""
	I1016 18:29:09.872802  228782 logs.go:282] 0 containers: []
	W1016 18:29:09.872812  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:09.872819  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:09.872878  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:09.909210  228782 cri.go:89] found id: ""
	I1016 18:29:09.909236  228782 logs.go:282] 0 containers: []
	W1016 18:29:09.909247  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:09.909255  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:09.909312  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:09.945086  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:09.945108  228782 cri.go:89] found id: ""
	I1016 18:29:09.945117  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:09.945174  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:09.950041  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:09.950103  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:09.987902  228782 cri.go:89] found id: ""
	I1016 18:29:09.987927  228782 logs.go:282] 0 containers: []
	W1016 18:29:09.987938  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:09.987949  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:09.988003  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:10.021037  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:10.021074  228782 cri.go:89] found id: ""
	I1016 18:29:10.021082  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:10.021134  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:10.026004  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:10.026077  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:10.055087  228782 cri.go:89] found id: ""
	I1016 18:29:10.055111  228782 logs.go:282] 0 containers: []
	W1016 18:29:10.055121  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:10.055135  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:10.055193  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:10.085674  228782 cri.go:89] found id: ""
	I1016 18:29:10.085703  228782 logs.go:282] 0 containers: []
	W1016 18:29:10.085737  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:10.085750  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:10.085763  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:10.164177  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:10.164213  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:10.199764  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:10.199797  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:10.318961  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:10.318998  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:10.347541  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:10.347582  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:10.426635  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:10.426658  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:10.426673  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:10.460893  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:10.460927  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:10.514361  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:10.514395  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:13.045784  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:13.046220  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:13.046274  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:13.046330  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:13.079185  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:13.079212  228782 cri.go:89] found id: ""
	I1016 18:29:13.079222  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:13.079289  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:13.083978  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:13.084050  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:13.114350  228782 cri.go:89] found id: ""
	I1016 18:29:13.114374  228782 logs.go:282] 0 containers: []
	W1016 18:29:13.114385  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:13.114392  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:13.114444  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:13.141976  228782 cri.go:89] found id: ""
	I1016 18:29:13.142002  228782 logs.go:282] 0 containers: []
	W1016 18:29:13.142010  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:13.142016  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:13.142086  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:13.174818  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:13.174848  228782 cri.go:89] found id: ""
	I1016 18:29:13.174858  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:13.174909  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:13.179004  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:13.179070  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:13.214403  228782 cri.go:89] found id: ""
	I1016 18:29:13.214431  228782 logs.go:282] 0 containers: []
	W1016 18:29:13.214442  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:13.214449  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:13.214507  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:13.246810  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:13.246834  228782 cri.go:89] found id: ""
	I1016 18:29:13.246844  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:13.246902  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:13.251623  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:13.251685  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:13.283291  228782 cri.go:89] found id: ""
	I1016 18:29:13.283318  228782 logs.go:282] 0 containers: []
	W1016 18:29:13.283329  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:13.283339  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:13.283388  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:13.311343  228782 cri.go:89] found id: ""
	I1016 18:29:13.311368  228782 logs.go:282] 0 containers: []
	W1016 18:29:13.311376  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:13.311383  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:13.311396  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:13.368339  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:13.368377  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:13.398197  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:13.398227  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:13.511753  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:13.511788  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:13.529854  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:13.529890  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:13.602327  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:13.602347  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:13.602359  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:13.636600  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:13.636635  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:13.688431  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:13.688469  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
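Both gathering passes above follow one pattern: resolve container IDs by name with crictl, then tail each component's logs alongside the journald units. A minimal shell reconstruction of that loop (not minikube's actual Go code in logs.go; container names and the --tail 400 depth are taken verbatim from the commands above):

    #!/bin/bash
    # Reconstruction of the log-gathering pass shown above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\"" >&2
        continue
      fi
      for id in $ids; do
        sudo crictl logs --tail 400 "$id"
      done
    done
    # Node-level sources gathered in the same pass:
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a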
	I1016 18:29:14.812495  249491 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1016 18:29:14.812565  249491 kubeadm.go:318] [preflight] Running pre-flight checks
	I1016 18:29:14.812651  249491 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1016 18:29:14.812697  249491 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1016 18:29:14.812750  249491 kubeadm.go:318] OS: Linux
	I1016 18:29:14.812798  249491 kubeadm.go:318] CGROUPS_CPU: enabled
	I1016 18:29:14.812846  249491 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1016 18:29:14.812885  249491 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1016 18:29:14.812952  249491 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1016 18:29:14.812998  249491 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1016 18:29:14.813044  249491 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1016 18:29:14.813153  249491 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1016 18:29:14.813231  249491 kubeadm.go:318] CGROUPS_IO: enabled
	I1016 18:29:14.813325  249491 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1016 18:29:14.813441  249491 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1016 18:29:14.813562  249491 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1016 18:29:14.813642  249491 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1016 18:29:14.815445  249491 out.go:252]   - Generating certificates and keys ...
	I1016 18:29:14.815539  249491 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 18:29:14.815602  249491 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 18:29:14.815663  249491 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 18:29:14.815743  249491 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 18:29:14.815797  249491 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 18:29:14.815883  249491 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 18:29:14.815954  249491 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 18:29:14.816076  249491 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-063117 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1016 18:29:14.816123  249491 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 18:29:14.816240  249491 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-063117 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1016 18:29:14.816345  249491 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 18:29:14.816434  249491 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 18:29:14.816488  249491 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 18:29:14.816537  249491 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 18:29:14.816611  249491 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 18:29:14.816701  249491 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 18:29:14.816787  249491 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 18:29:14.816885  249491 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 18:29:14.816956  249491 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 18:29:14.817026  249491 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 18:29:14.817091  249491 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 18:29:14.818496  249491 out.go:252]   - Booting up control plane ...
	I1016 18:29:14.818580  249491 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 18:29:14.818643  249491 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 18:29:14.818755  249491 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 18:29:14.818887  249491 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 18:29:14.819010  249491 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 18:29:14.819110  249491 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 18:29:14.819187  249491 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 18:29:14.819224  249491 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 18:29:14.819345  249491 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 18:29:14.819458  249491 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 18:29:14.819519  249491 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500924512s
	I1016 18:29:14.819610  249491 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 18:29:14.819682  249491 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1016 18:29:14.819785  249491 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 18:29:14.819861  249491 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 18:29:14.819937  249491 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.311071654s
	I1016 18:29:14.819995  249491 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.104436473s
	I1016 18:29:14.820062  249491 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.00209408s
	I1016 18:29:14.820157  249491 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 18:29:14.820281  249491 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 18:29:14.820375  249491 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 18:29:14.820585  249491 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-063117 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 18:29:14.820666  249491 kubeadm.go:318] [bootstrap-token] Using token: 5rsifa.smk486u4t69rbatb
	I1016 18:29:14.822434  249491 out.go:252]   - Configuring RBAC rules ...
	I1016 18:29:14.822560  249491 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 18:29:14.822656  249491 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 18:29:14.822845  249491 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 18:29:14.823060  249491 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 18:29:14.823170  249491 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 18:29:14.823249  249491 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 18:29:14.823359  249491 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 18:29:14.823399  249491 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 18:29:14.823440  249491 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 18:29:14.823446  249491 kubeadm.go:318] 
	I1016 18:29:14.823500  249491 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 18:29:14.823519  249491 kubeadm.go:318] 
	I1016 18:29:14.823599  249491 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 18:29:14.823606  249491 kubeadm.go:318] 
	I1016 18:29:14.823628  249491 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 18:29:14.823679  249491 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 18:29:14.823767  249491 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 18:29:14.823775  249491 kubeadm.go:318] 
	I1016 18:29:14.823844  249491 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 18:29:14.823859  249491 kubeadm.go:318] 
	I1016 18:29:14.823926  249491 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 18:29:14.823936  249491 kubeadm.go:318] 
	I1016 18:29:14.824017  249491 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 18:29:14.824127  249491 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 18:29:14.824285  249491 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 18:29:14.824304  249491 kubeadm.go:318] 
	I1016 18:29:14.824446  249491 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 18:29:14.824583  249491 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 18:29:14.824596  249491 kubeadm.go:318] 
	I1016 18:29:14.824739  249491 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 5rsifa.smk486u4t69rbatb \
	I1016 18:29:14.824843  249491 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c \
	I1016 18:29:14.824866  249491 kubeadm.go:318] 	--control-plane 
	I1016 18:29:14.824870  249491 kubeadm.go:318] 
	I1016 18:29:14.824963  249491 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 18:29:14.824974  249491 kubeadm.go:318] 
	I1016 18:29:14.825046  249491 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 5rsifa.smk486u4t69rbatb \
	I1016 18:29:14.825152  249491 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c 
	I1016 18:29:14.825162  249491 cni.go:84] Creating CNI manager for ""
	I1016 18:29:14.825169  249491 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:29:14.826898  249491 out.go:179] * Configuring CNI (Container Networking Interface) ...
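The [control-plane-check] lines in the kubeadm output above poll fixed health endpoints, and the same probes can be reproduced by hand. A sketch using exactly the URLs kubeadm reports (the apiserver address is this cluster's 192.168.103.2; -k is needed because the serving certs are self-signed):

    #!/bin/bash
    # Hand-rolled versions of the health probes kubeadm reports above.
    curl -s  http://127.0.0.1:10248/healthz    # kubelet
    curl -sk https://127.0.0.1:10257/healthz   # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez     # kube-scheduler
    curl -sk https://192.168.103.2:8443/livez  # kube-apiserver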
	I1016 18:29:12.063356  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Running}}
	I1016 18:29:12.082378  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:29:12.101794  254209 cli_runner.go:164] Run: docker exec default-k8s-diff-port-523257 stat /var/lib/dpkg/alternatives/iptables
	I1016 18:29:12.150828  254209 oci.go:144] the created container "default-k8s-diff-port-523257" has a running status.
	I1016 18:29:12.150862  254209 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa...
	I1016 18:29:12.360966  254209 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1016 18:29:12.395477  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:29:12.421296  254209 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1016 18:29:12.421318  254209 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-523257 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1016 18:29:12.475647  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:29:12.500605  254209 machine.go:93] provisionDockerMachine start ...
	I1016 18:29:12.500741  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:12.520832  254209 main.go:141] libmachine: Using SSH client type: native
	I1016 18:29:12.521147  254209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1016 18:29:12.521169  254209 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:29:12.668259  254209 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-523257
	
	I1016 18:29:12.668290  254209 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-523257"
	I1016 18:29:12.668359  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:12.690556  254209 main.go:141] libmachine: Using SSH client type: native
	I1016 18:29:12.690997  254209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1016 18:29:12.691041  254209 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-523257 && echo "default-k8s-diff-port-523257" | sudo tee /etc/hostname
	I1016 18:29:12.853318  254209 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-523257
	
	I1016 18:29:12.853397  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:12.874368  254209 main.go:141] libmachine: Using SSH client type: native
	I1016 18:29:12.875979  254209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1016 18:29:12.876032  254209 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-523257' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-523257/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-523257' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:29:13.023166  254209 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:29:13.023197  254209 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8849/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8849/.minikube}
	I1016 18:29:13.023247  254209 ubuntu.go:190] setting up certificates
	I1016 18:29:13.023261  254209 provision.go:84] configureAuth start
	I1016 18:29:13.023324  254209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523257
	I1016 18:29:13.044297  254209 provision.go:143] copyHostCerts
	I1016 18:29:13.044377  254209 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem, removing ...
	I1016 18:29:13.044387  254209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem
	I1016 18:29:13.044480  254209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem (1078 bytes)
	I1016 18:29:13.044612  254209 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem, removing ...
	I1016 18:29:13.044620  254209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem
	I1016 18:29:13.044665  254209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem (1123 bytes)
	I1016 18:29:13.044833  254209 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem, removing ...
	I1016 18:29:13.044854  254209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem
	I1016 18:29:13.044899  254209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem (1679 bytes)
	I1016 18:29:13.044986  254209 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-523257 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-523257 localhost minikube]
	I1016 18:29:13.322042  254209 provision.go:177] copyRemoteCerts
	I1016 18:29:13.322098  254209 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:29:13.322130  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:13.341345  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:13.443517  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:29:13.466314  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 18:29:13.488307  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1016 18:29:13.510900  254209 provision.go:87] duration metric: took 487.621457ms to configureAuth
	I1016 18:29:13.510932  254209 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:29:13.511156  254209 config.go:182] Loaded profile config "default-k8s-diff-port-523257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:29:13.511275  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:13.533416  254209 main.go:141] libmachine: Using SSH client type: native
	I1016 18:29:13.533709  254209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1016 18:29:13.533754  254209 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:29:13.799038  254209 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:29:13.799068  254209 machine.go:96] duration metric: took 1.2984414s to provisionDockerMachine
	I1016 18:29:13.799083  254209 client.go:171] duration metric: took 6.539927602s to LocalClient.Create
	I1016 18:29:13.799111  254209 start.go:167] duration metric: took 6.540012376s to libmachine.API.Create "default-k8s-diff-port-523257"
	I1016 18:29:13.799126  254209 start.go:293] postStartSetup for "default-k8s-diff-port-523257" (driver="docker")
	I1016 18:29:13.799140  254209 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:29:13.799211  254209 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:29:13.799291  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:13.819622  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:13.924749  254209 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:29:13.928900  254209 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:29:13.928949  254209 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:29:13.928962  254209 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/addons for local assets ...
	I1016 18:29:13.929014  254209 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/files for local assets ...
	I1016 18:29:13.929153  254209 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem -> 123752.pem in /etc/ssl/certs
	I1016 18:29:13.929270  254209 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:29:13.938068  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:29:13.959350  254209 start.go:296] duration metric: took 160.208327ms for postStartSetup
	I1016 18:29:13.959772  254209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523257
	I1016 18:29:13.981564  254209 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/config.json ...
	I1016 18:29:13.981929  254209 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:29:13.981986  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:14.002862  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:14.105028  254209 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:29:14.109906  254209 start.go:128] duration metric: took 6.854190815s to createHost
	I1016 18:29:14.109928  254209 start.go:83] releasing machines lock for "default-k8s-diff-port-523257", held for 6.854337757s
	I1016 18:29:14.109985  254209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523257
	I1016 18:29:14.129342  254209 ssh_runner.go:195] Run: cat /version.json
	I1016 18:29:14.129364  254209 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:29:14.129388  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:14.129427  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:14.148145  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:14.148510  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:14.301264  254209 ssh_runner.go:195] Run: systemctl --version
	I1016 18:29:14.308012  254209 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:29:14.343595  254209 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:29:14.348610  254209 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:29:14.348680  254209 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:29:14.374585  254209 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
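The find step above sidelines pre-existing bridge and podman CNI definitions so the kindnet config recommended earlier wins. An equivalent standalone sketch (the .mk_disabled suffix and match patterns come straight from the logged command; parentheses are escaped here because this form goes through a shell):

    #!/bin/bash
    # Rename conflicting bridge/podman CNI configs out of the way, as the log does.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;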
	I1016 18:29:14.374606  254209 start.go:495] detecting cgroup driver to use...
	I1016 18:29:14.374641  254209 detect.go:190] detected "systemd" cgroup driver on host os
	I1016 18:29:14.374699  254209 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:29:14.390967  254209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:29:14.404114  254209 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:29:14.404173  254209 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:29:14.423858  254209 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:29:14.443353  254209 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:29:14.528065  254209 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:29:14.616017  254209 docker.go:234] disabling docker service ...
	I1016 18:29:14.616093  254209 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:29:14.636286  254209 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:29:14.649917  254209 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:29:14.738496  254209 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:29:14.830481  254209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:29:14.844213  254209 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:29:14.860041  254209 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:29:14.860111  254209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.871530  254209 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1016 18:29:14.871599  254209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.882155  254209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.891583  254209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.901751  254209 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:29:14.911126  254209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.923235  254209 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.940508  254209 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.951261  254209 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:29:14.961600  254209 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:29:14.969949  254209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:29:15.065750  254209 ssh_runner.go:195] Run: sudo systemctl restart crio
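Condensed, the sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf for the pause image, the systemd cgroup driver, the conmon cgroup, and the unprivileged-port sysctl, points crictl at the CRI-O socket, then restarts the runtime. The same steps as one script (every value is taken from the commands in the log):

    #!/bin/bash
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # Ensure a default_sysctls list exists, then allow pods to bind low ports.
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    # Point crictl at the CRI-O socket, then restart the runtime.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo systemctl daemon-reload
    sudo systemctl restart crio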
	I1016 18:29:15.196909  254209 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:29:15.197013  254209 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:29:15.201701  254209 start.go:563] Will wait 60s for crictl version
	I1016 18:29:15.201777  254209 ssh_runner.go:195] Run: which crictl
	I1016 18:29:15.205695  254209 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:29:15.235561  254209 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:29:15.235649  254209 ssh_runner.go:195] Run: crio --version
	I1016 18:29:15.265880  254209 ssh_runner.go:195] Run: crio --version
	I1016 18:29:15.296467  254209 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:29:15.297746  254209 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-523257 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:29:15.315570  254209 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1016 18:29:15.319846  254209 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:29:15.330320  254209 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-523257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:29:15.330442  254209 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:29:15.330496  254209 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:29:15.362598  254209 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:29:15.362621  254209 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:29:15.362681  254209 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:29:15.388591  254209 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:29:15.388610  254209 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:29:15.388617  254209 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1016 18:29:15.388687  254209 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-523257 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
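The unit fragment above becomes the 10-kubeadm.conf drop-in scp'd a few lines below (378 bytes). Once written, it can be inspected in place; a small check using the path from that scp line:

    # Inspect the kubelet drop-in minikube writes (path from the scp line below).
    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf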
	I1016 18:29:15.388767  254209 ssh_runner.go:195] Run: crio config
	I1016 18:29:15.438126  254209 cni.go:84] Creating CNI manager for ""
	I1016 18:29:15.438153  254209 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:29:15.438169  254209 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:29:15.438189  254209 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-523257 NodeName:default-k8s-diff-port-523257 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:29:15.438304  254209 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-523257"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
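The rendered config above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (the scp a few lines below) before being copied into place. A hedged sanity check, grepping only fields visible in the YAML above:

    #!/bin/bash
    # Quick consistency check of the staged kubeadm config.
    F=/var/tmp/minikube/kubeadm.yaml.new
    grep -q 'cgroupDriver: systemd' "$F" &&
    grep -q 'podSubnet: "10.244.0.0/16"' "$F" &&
    grep -q 'controlPlaneEndpoint: control-plane.minikube.internal:8444' "$F" &&
    grep -q 'criSocket: unix:///var/run/crio/crio.sock' "$F" &&
    echo "kubeadm config looks consistent"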
	I1016 18:29:15.438360  254209 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:29:15.446851  254209 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:29:15.446904  254209 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:29:15.455376  254209 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1016 18:29:15.468422  254209 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:29:15.485061  254209 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1016 18:29:15.499028  254209 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1016 18:29:15.502992  254209 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:29:15.514119  254209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:29:15.600483  254209 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:29:15.628358  254209 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257 for IP: 192.168.85.2
	I1016 18:29:15.628376  254209 certs.go:195] generating shared ca certs ...
	I1016 18:29:15.628396  254209 certs.go:227] acquiring lock for ca certs: {Name:mkebf15a3970a66f77cf66e14f6efeaafcab5e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:15.628509  254209 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key
	I1016 18:29:15.628562  254209 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key
	I1016 18:29:15.628573  254209 certs.go:257] generating profile certs ...
	I1016 18:29:15.628628  254209 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/client.key
	I1016 18:29:15.628653  254209 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/client.crt with IP's: []
	I1016 18:29:15.968981  254209 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/client.crt ...
	I1016 18:29:15.969015  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/client.crt: {Name:mkc48781ddaf69d7e01ca677e4849b4caaee56c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:15.969236  254209 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/client.key ...
	I1016 18:29:15.969256  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/client.key: {Name:mkc621b8b4bfad359a056391feef8110384c6c9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:15.969390  254209 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key.0a5b079c
	I1016 18:29:15.969417  254209 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.crt.0a5b079c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1016 18:29:16.391278  254209 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.crt.0a5b079c ...
	I1016 18:29:16.391304  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.crt.0a5b079c: {Name:mk6cc283b84aa2fe24d23bc336c141b44112e826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:16.391464  254209 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key.0a5b079c ...
	I1016 18:29:16.391483  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key.0a5b079c: {Name:mkcaa57ee51fbf6de8c055b9c377d12f3a0aabf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:16.391560  254209 certs.go:382] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.crt.0a5b079c -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.crt
	I1016 18:29:16.391667  254209 certs.go:386] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key.0a5b079c -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key
	I1016 18:29:16.391772  254209 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.key
	I1016 18:29:16.391791  254209 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.crt with IP's: []
	I1016 18:29:16.512660  254209 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.crt ...
	I1016 18:29:16.512692  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.crt: {Name:mk2207d19f2814a793ac863fddc556c919eb7e93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:16.512893  254209 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.key ...
	I1016 18:29:16.512912  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.key: {Name:mk634f24088d880b43b87026568c66491c8f3f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
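minikube generates these profile certs in-process (the crypto.go calls above), not by shelling out. For reference only, an equivalent openssl sketch for the apiserver cert, assuming the CA pair exists on disk; the SAN IPs are the ones listed in the Generating line above:

    #!/bin/bash
    # Not what minikube runs; an illustrative openssl equivalent.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout apiserver.key -out apiserver.csr -subj "/CN=minikube"
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -out apiserver.crt -days 365 \
      -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2")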
	I1016 18:29:16.513157  254209 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem (1338 bytes)
	W1016 18:29:16.513208  254209 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375_empty.pem, impossibly tiny 0 bytes
	I1016 18:29:16.513224  254209 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem (1679 bytes)
	I1016 18:29:16.513258  254209 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem (1078 bytes)
	I1016 18:29:16.513299  254209 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:29:16.513332  254209 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem (1679 bytes)
	I1016 18:29:16.513390  254209 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:29:16.514000  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:29:16.534467  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1016 18:29:16.553911  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:29:16.572888  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1016 18:29:16.593316  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1016 18:29:16.613396  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1016 18:29:16.633847  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:29:16.652859  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:29:16.671301  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem --> /usr/share/ca-certificates/12375.pem (1338 bytes)
	I1016 18:29:16.692139  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /usr/share/ca-certificates/123752.pem (1708 bytes)
	I1016 18:29:16.711854  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:29:16.733100  254209 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:29:16.748870  254209 ssh_runner.go:195] Run: openssl version
	I1016 18:29:16.756698  254209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12375.pem && ln -fs /usr/share/ca-certificates/12375.pem /etc/ssl/certs/12375.pem"
	I1016 18:29:16.765852  254209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12375.pem
	I1016 18:29:16.770890  254209 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:49 /usr/share/ca-certificates/12375.pem
	I1016 18:29:16.770951  254209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12375.pem
	I1016 18:29:16.809579  254209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12375.pem /etc/ssl/certs/51391683.0"
	I1016 18:29:16.818448  254209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123752.pem && ln -fs /usr/share/ca-certificates/123752.pem /etc/ssl/certs/123752.pem"
	I1016 18:29:16.828572  254209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123752.pem
	I1016 18:29:16.833466  254209 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:49 /usr/share/ca-certificates/123752.pem
	I1016 18:29:16.833518  254209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123752.pem
	I1016 18:29:16.869942  254209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123752.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:29:16.879161  254209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:29:16.888390  254209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:29:16.892672  254209 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:29:16.892743  254209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:29:16.928324  254209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
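The openssl x509 -hash / ln -fs pairs above implement the standard OpenSSL CA-directory layout: each CA is symlinked as <subject-hash>.0 so TLS clients can find it by hash (the hash b5213941 for minikubeCA is visible in the link command above). The same step by hand:

    #!/bin/bash
    # Install a CA into the OpenSSL hash-named directory layout, as the log does.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"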
	I1016 18:29:16.937883  254209 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:29:16.941427  254209 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1016 18:29:16.941477  254209 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-523257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:29:16.941533  254209 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:29:16.941590  254209 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:29:16.969823  254209 cri.go:89] found id: ""
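
An empty `found id` list here simply means no kube-system containers exist yet on the fresh node. The listing itself is a thin wrapper over crictl's label filter; a hedged sketch of the equivalent call (an illustrative helper, not cri.go itself):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listKubeSystemContainers returns the IDs of all kube-system containers
    // (running or not), matching the crictl invocation in the log above.
    func listKubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line) // --quiet prints one container ID per line
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listKubeSystemContainers()
        fmt.Println(ids, err) // an empty slice on a fresh node, as in the log
    }
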
	I1016 18:29:16.969879  254209 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:29:16.978105  254209 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 18:29:16.986454  254209 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1016 18:29:16.986509  254209 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 18:29:16.994659  254209 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1016 18:29:16.994677  254209 kubeadm.go:157] found existing configuration files:
	
	I1016 18:29:16.994734  254209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1016 18:29:17.002515  254209 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1016 18:29:17.002569  254209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 18:29:17.010005  254209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1016 18:29:17.017762  254209 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1016 18:29:17.017809  254209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 18:29:17.025281  254209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1016 18:29:17.033745  254209 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1016 18:29:17.033809  254209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	W1016 18:29:14.418032  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	W1016 18:29:16.918331  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	I1016 18:29:16.216787  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:16.217184  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
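
This probe/refusal pair repeats below while process 228782's control plane is still down. The pattern is an HTTPS GET against /healthz in which any transport error (here "connection refused") is reported as "stopped"; a self-contained sketch, with certificate verification disabled as is reasonable only against a throwaway test apiserver:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // apiserverHealthy mirrors the api_server.go probe above: GET /healthz
    // and treat any dial error as "not up yet".
    func apiserverHealthy(url string) bool {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // test cluster only
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return false // e.g. "connect: connection refused" while the apiserver is down
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK
    }

    func main() {
        fmt.Println(apiserverHealthy("https://192.168.76.2:8443/healthz"))
    }
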
	I1016 18:29:16.217232  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:16.217290  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:16.260046  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:16.260069  228782 cri.go:89] found id: ""
	I1016 18:29:16.260081  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:16.260138  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:16.264404  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:16.264461  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:16.292812  228782 cri.go:89] found id: ""
	I1016 18:29:16.292840  228782 logs.go:282] 0 containers: []
	W1016 18:29:16.292849  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:16.292857  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:16.292916  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:16.320501  228782 cri.go:89] found id: ""
	I1016 18:29:16.320525  228782 logs.go:282] 0 containers: []
	W1016 18:29:16.320537  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:16.320543  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:16.320601  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:16.349176  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:16.349201  228782 cri.go:89] found id: ""
	I1016 18:29:16.349211  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:16.349261  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:16.353478  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:16.353557  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:16.381526  228782 cri.go:89] found id: ""
	I1016 18:29:16.381551  228782 logs.go:282] 0 containers: []
	W1016 18:29:16.381560  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:16.381566  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:16.381622  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:16.410669  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:16.410688  228782 cri.go:89] found id: ""
	I1016 18:29:16.410698  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:16.410766  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:16.415132  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:16.415201  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:16.444976  228782 cri.go:89] found id: ""
	I1016 18:29:16.445004  228782 logs.go:282] 0 containers: []
	W1016 18:29:16.445015  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:16.445023  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:16.445079  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:16.476137  228782 cri.go:89] found id: ""
	I1016 18:29:16.476164  228782 logs.go:282] 0 containers: []
	W1016 18:29:16.476174  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:16.476185  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:16.476198  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:16.507953  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:16.507978  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:16.570051  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:16.570092  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:16.603032  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:16.603070  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:16.693780  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:16.693814  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:16.710844  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:16.710881  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:16.773893  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:16.773917  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:16.773931  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:16.807340  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:16.807368  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:14.828263  249491 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 18:29:14.833625  249491 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 18:29:14.833646  249491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 18:29:14.848089  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 18:29:15.084417  249491 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 18:29:15.084527  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:15.084544  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-063117 minikube.k8s.io/updated_at=2025_10_16T18_29_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=embed-certs-063117 minikube.k8s.io/primary=true
	I1016 18:29:15.180501  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:15.180512  249491 ops.go:34] apiserver oom_adj: -16
	I1016 18:29:15.681132  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:16.180980  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:16.681259  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:17.181627  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:17.681148  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:18.180852  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:18.681519  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:19.180964  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:19.681224  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:19.769979  249491 kubeadm.go:1113] duration metric: took 4.685530547s to wait for elevateKubeSystemPrivileges
	I1016 18:29:19.770014  249491 kubeadm.go:402] duration metric: took 18.251827782s to StartCluster
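
The ~4.7s elevateKubeSystemPrivileges wait corresponds to the `get sa default` polling visible above at roughly 500ms intervals: the RBAC bootstrap is considered done once the default service account exists. A sketch of that retry loop (a stand-in for minikube's ssh_runner-based version, with the timeout chosen arbitrarily):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds,
    // like the 500ms loop in the log above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig="+kubeconfig)
            if cmd.Run() == nil {
                return nil // service account exists; kube-system is usable
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        err := waitForDefaultSA("/var/lib/minikube/binaries/v1.34.1/kubectl",
            "/var/lib/minikube/kubeconfig", 5*time.Minute)
        fmt.Println(err)
    }
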
	I1016 18:29:19.770034  249491 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:19.770128  249491 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:29:19.771546  249491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:19.771780  249491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 18:29:19.771795  249491 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:29:19.771842  249491 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:29:19.771949  249491 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-063117"
	I1016 18:29:19.771958  249491 addons.go:69] Setting default-storageclass=true in profile "embed-certs-063117"
	I1016 18:29:19.771971  249491 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-063117"
	I1016 18:29:19.771979  249491 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-063117"
	I1016 18:29:19.771979  249491 config.go:182] Loaded profile config "embed-certs-063117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:29:19.772007  249491 host.go:66] Checking if "embed-certs-063117" exists ...
	I1016 18:29:19.772413  249491 cli_runner.go:164] Run: docker container inspect embed-certs-063117 --format={{.State.Status}}
	I1016 18:29:19.772558  249491 cli_runner.go:164] Run: docker container inspect embed-certs-063117 --format={{.State.Status}}
	I1016 18:29:19.776284  249491 out.go:179] * Verifying Kubernetes components...
	I1016 18:29:19.777682  249491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:29:19.800564  249491 addons.go:238] Setting addon default-storageclass=true in "embed-certs-063117"
	I1016 18:29:19.800668  249491 host.go:66] Checking if "embed-certs-063117" exists ...
	I1016 18:29:19.801165  249491 cli_runner.go:164] Run: docker container inspect embed-certs-063117 --format={{.State.Status}}
	I1016 18:29:19.803130  249491 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:29:19.804678  249491 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:29:19.804699  249491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:29:19.804856  249491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-063117
	I1016 18:29:19.826115  249491 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:29:19.826138  249491 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:29:19.826207  249491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-063117
	I1016 18:29:19.832338  249491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/embed-certs-063117/id_rsa Username:docker}
	I1016 18:29:19.861747  249491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/embed-certs-063117/id_rsa Username:docker}
	I1016 18:29:19.882221  249491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 18:29:19.965940  249491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:29:19.969094  249491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:29:19.987077  249491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:29:20.101590  249491 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
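
For reference, the sed pipeline at 18:29:19.882 splices a `hosts` stanza into the Corefile ahead of its `forward . /etc/resolv.conf` block (plus a `log` directive at `errors`), so in-cluster lookups of host.minikube.internal resolve to the host gateway. The injected fragment is effectively:

    hosts {
       192.168.103.1 host.minikube.internal
       fallthrough
    }
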
	I1016 18:29:20.105376  249491 node_ready.go:35] waiting up to 6m0s for node "embed-certs-063117" to be "Ready" ...
	I1016 18:29:20.328792  249491 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1016 18:29:17.041611  254209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1016 18:29:17.049911  254209 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1016 18:29:17.049971  254209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
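
All four kubeconfigs were absent here (status 2 from both `ls` and `grep`), so there is nothing stale to clean and kubeadm will generate them fresh. The check-then-remove logic amounts to: keep a config only if it already names the expected control-plane endpoint. A simplified sketch (the helper is illustrative, not kubeadm.go):

    package main

    import (
        "os"
        "strings"
    )

    // cleanStaleConfigs keeps a kubeconfig only if it already points at the
    // expected endpoint; otherwise it is removed so `kubeadm init` can
    // regenerate it. Already-missing files are fine, as in this run.
    func cleanStaleConfigs(endpoint string, files []string) error {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err == nil && strings.Contains(string(data), endpoint) {
                continue // config already targets the right endpoint
            }
            if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
                return err
            }
        }
        return nil
    }

    func main() {
        _ = cleanStaleConfigs("https://control-plane.minikube.internal:8444", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
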
	I1016 18:29:17.058089  254209 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1016 18:29:17.137219  254209 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1016 18:29:17.203085  254209 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1016 18:29:19.418382  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	W1016 18:29:21.918282  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	I1016 18:29:19.359592  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:19.360042  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:19.360098  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:19.360144  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:19.393040  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:19.393066  228782 cri.go:89] found id: ""
	I1016 18:29:19.393076  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:19.393131  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:19.397814  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:19.397881  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:19.427286  228782 cri.go:89] found id: ""
	I1016 18:29:19.427314  228782 logs.go:282] 0 containers: []
	W1016 18:29:19.427322  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:19.427327  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:19.427375  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:19.462227  228782 cri.go:89] found id: ""
	I1016 18:29:19.462266  228782 logs.go:282] 0 containers: []
	W1016 18:29:19.462279  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:19.462287  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:19.462348  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:19.496749  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:19.496774  228782 cri.go:89] found id: ""
	I1016 18:29:19.496783  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:19.496840  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:19.501521  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:19.501595  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:19.529247  228782 cri.go:89] found id: ""
	I1016 18:29:19.529274  228782 logs.go:282] 0 containers: []
	W1016 18:29:19.529289  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:19.529296  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:19.529359  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:19.564781  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:19.564804  228782 cri.go:89] found id: ""
	I1016 18:29:19.564814  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:19.564929  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:19.570532  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:19.570606  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:19.604855  228782 cri.go:89] found id: ""
	I1016 18:29:19.604883  228782 logs.go:282] 0 containers: []
	W1016 18:29:19.604893  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:19.604901  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:19.604953  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:19.638992  228782 cri.go:89] found id: ""
	I1016 18:29:19.639022  228782 logs.go:282] 0 containers: []
	W1016 18:29:19.639034  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:19.639045  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:19.639061  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:19.701460  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:19.701505  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:19.742847  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:19.742874  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:19.829432  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:19.829906  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:19.877323  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:19.877363  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:20.013993  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:20.014026  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:20.033495  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:20.033528  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:20.125927  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:20.125955  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:20.125979  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:22.676779  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:22.677325  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:22.677386  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:22.677441  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:22.704967  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:22.704992  228782 cri.go:89] found id: ""
	I1016 18:29:22.705001  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:22.705054  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:22.709172  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:22.709227  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:22.737460  228782 cri.go:89] found id: ""
	I1016 18:29:22.737488  228782 logs.go:282] 0 containers: []
	W1016 18:29:22.737497  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:22.737502  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:22.737557  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:22.765144  228782 cri.go:89] found id: ""
	I1016 18:29:22.765167  228782 logs.go:282] 0 containers: []
	W1016 18:29:22.765174  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:22.765182  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:22.765234  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:22.794804  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:22.794830  228782 cri.go:89] found id: ""
	I1016 18:29:22.794842  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:22.794896  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:22.799171  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:22.799236  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:22.826223  228782 cri.go:89] found id: ""
	I1016 18:29:22.826245  228782 logs.go:282] 0 containers: []
	W1016 18:29:22.826254  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:22.826262  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:22.826320  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:22.853663  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:22.853687  228782 cri.go:89] found id: ""
	I1016 18:29:22.853697  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:22.853766  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:22.857917  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:22.857976  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:22.886082  228782 cri.go:89] found id: ""
	I1016 18:29:22.886104  228782 logs.go:282] 0 containers: []
	W1016 18:29:22.886111  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:22.886116  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:22.886161  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:22.914756  228782 cri.go:89] found id: ""
	I1016 18:29:22.914785  228782 logs.go:282] 0 containers: []
	W1016 18:29:22.914795  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:22.914806  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:22.914819  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:22.948094  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:22.948123  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:23.063153  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:23.063191  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:23.086210  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:23.086246  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:23.158625  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:23.158644  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:23.158655  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:23.196125  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:23.196164  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:23.249568  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:23.249603  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:23.278700  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:23.278755  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:20.330231  249491 addons.go:514] duration metric: took 558.387286ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1016 18:29:20.605751  249491 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-063117" context rescaled to 1 replicas
	W1016 18:29:22.108077  249491 node_ready.go:57] node "embed-certs-063117" has "Ready":"False" status (will retry)
	W1016 18:29:24.108885  249491 node_ready.go:57] node "embed-certs-063117" has "Ready":"False" status (will retry)
	W1016 18:29:23.920030  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	I1016 18:29:26.418984  245371 pod_ready.go:94] pod "coredns-66bc5c9577-ntqqg" is "Ready"
	I1016 18:29:26.419015  245371 pod_ready.go:86] duration metric: took 37.506349558s for pod "coredns-66bc5c9577-ntqqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.421830  245371 pod_ready.go:83] waiting for pod "etcd-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.426444  245371 pod_ready.go:94] pod "etcd-no-preload-808539" is "Ready"
	I1016 18:29:26.426468  245371 pod_ready.go:86] duration metric: took 4.611842ms for pod "etcd-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.428754  245371 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.433020  245371 pod_ready.go:94] pod "kube-apiserver-no-preload-808539" is "Ready"
	I1016 18:29:26.433042  245371 pod_ready.go:86] duration metric: took 4.265191ms for pod "kube-apiserver-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.435232  245371 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.616325  245371 pod_ready.go:94] pod "kube-controller-manager-no-preload-808539" is "Ready"
	I1016 18:29:26.616358  245371 pod_ready.go:86] duration metric: took 181.098764ms for pod "kube-controller-manager-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.816373  245371 pod_ready.go:83] waiting for pod "kube-proxy-68kl9" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:27.217098  245371 pod_ready.go:94] pod "kube-proxy-68kl9" is "Ready"
	I1016 18:29:27.217132  245371 pod_ready.go:86] duration metric: took 400.735206ms for pod "kube-proxy-68kl9" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:27.419792  245371 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:27.816058  245371 pod_ready.go:94] pod "kube-scheduler-no-preload-808539" is "Ready"
	I1016 18:29:27.816084  245371 pod_ready.go:86] duration metric: took 396.261228ms for pod "kube-scheduler-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:27.816099  245371 pod_ready.go:40] duration metric: took 38.907119982s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:29:27.860942  245371 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1016 18:29:27.862530  245371 out.go:179] * Done! kubectl is now configured to use "no-preload-808539" cluster and "default" namespace by default
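
The readiness gates that close out this start follow one pattern per control-plane label: block until matching kube-system pods report Ready (or, in minikube's poller, disappear). A rough approximation using `kubectl wait` in place of the internal pod_ready loop; it drops the "or be gone" branch and the labels are copied from the summary line above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // One selector per control-plane component, as in the pod_ready labels above.
        selectors := []string{
            "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
            "component=kube-controller-manager", "k8s-app=kube-proxy",
            "component=kube-scheduler",
        }
        for _, sel := range selectors {
            cmd := exec.Command("kubectl", "wait", "--namespace", "kube-system",
                "--for=condition=Ready", "pod", "-l", sel, "--timeout=6m")
            if err := cmd.Run(); err != nil {
                fmt.Printf("pods %q not ready: %v\n", sel, err)
            }
        }
    }
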
	I1016 18:29:28.379667  254209 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1016 18:29:28.379756  254209 kubeadm.go:318] [preflight] Running pre-flight checks
	I1016 18:29:28.379854  254209 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1016 18:29:28.379919  254209 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1016 18:29:28.379960  254209 kubeadm.go:318] OS: Linux
	I1016 18:29:28.380039  254209 kubeadm.go:318] CGROUPS_CPU: enabled
	I1016 18:29:28.380108  254209 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1016 18:29:28.380162  254209 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1016 18:29:28.380210  254209 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1016 18:29:28.380249  254209 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1016 18:29:28.380302  254209 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1016 18:29:28.380342  254209 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1016 18:29:28.380378  254209 kubeadm.go:318] CGROUPS_IO: enabled
	I1016 18:29:28.380440  254209 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1016 18:29:28.380523  254209 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1016 18:29:28.380601  254209 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1016 18:29:28.380687  254209 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1016 18:29:28.382133  254209 out.go:252]   - Generating certificates and keys ...
	I1016 18:29:28.382223  254209 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 18:29:28.382325  254209 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 18:29:28.382409  254209 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 18:29:28.382524  254209 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 18:29:28.382610  254209 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 18:29:28.382684  254209 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 18:29:28.382785  254209 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 18:29:28.382994  254209 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-523257 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1016 18:29:28.383094  254209 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 18:29:28.383267  254209 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-523257 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1016 18:29:28.383368  254209 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 18:29:28.383477  254209 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 18:29:28.383518  254209 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 18:29:28.383588  254209 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 18:29:28.383656  254209 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 18:29:28.383737  254209 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 18:29:28.383814  254209 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 18:29:28.383912  254209 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 18:29:28.383990  254209 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 18:29:28.384065  254209 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 18:29:28.384119  254209 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 18:29:28.385312  254209 out.go:252]   - Booting up control plane ...
	I1016 18:29:28.385390  254209 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 18:29:28.385468  254209 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 18:29:28.385537  254209 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 18:29:28.385629  254209 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 18:29:28.385708  254209 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 18:29:28.385846  254209 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 18:29:28.385944  254209 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 18:29:28.385987  254209 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 18:29:28.386112  254209 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 18:29:28.386205  254209 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 18:29:28.386257  254209 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501903762s
	I1016 18:29:28.386370  254209 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 18:29:28.386456  254209 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1016 18:29:28.386534  254209 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 18:29:28.386605  254209 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 18:29:28.386709  254209 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.138506302s
	I1016 18:29:28.386833  254209 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.998459766s
	I1016 18:29:28.386943  254209 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001498967s
	I1016 18:29:28.387079  254209 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 18:29:28.387241  254209 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 18:29:28.387341  254209 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 18:29:28.387557  254209 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-523257 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 18:29:28.387622  254209 kubeadm.go:318] [bootstrap-token] Using token: wqx7bh.ga0ezwq7c18mbgbm
	I1016 18:29:28.388960  254209 out.go:252]   - Configuring RBAC rules ...
	I1016 18:29:28.389058  254209 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 18:29:28.389159  254209 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 18:29:28.389377  254209 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 18:29:28.389512  254209 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 18:29:28.389640  254209 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 18:29:28.389787  254209 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 18:29:28.389938  254209 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 18:29:28.389981  254209 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 18:29:28.390023  254209 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 18:29:28.390028  254209 kubeadm.go:318] 
	I1016 18:29:28.390074  254209 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 18:29:28.390080  254209 kubeadm.go:318] 
	I1016 18:29:28.390140  254209 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 18:29:28.390146  254209 kubeadm.go:318] 
	I1016 18:29:28.390170  254209 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 18:29:28.390217  254209 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 18:29:28.390266  254209 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 18:29:28.390275  254209 kubeadm.go:318] 
	I1016 18:29:28.390327  254209 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 18:29:28.390333  254209 kubeadm.go:318] 
	I1016 18:29:28.390378  254209 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 18:29:28.390395  254209 kubeadm.go:318] 
	I1016 18:29:28.390444  254209 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 18:29:28.390542  254209 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 18:29:28.390666  254209 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 18:29:28.390676  254209 kubeadm.go:318] 
	I1016 18:29:28.390772  254209 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 18:29:28.390842  254209 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 18:29:28.390851  254209 kubeadm.go:318] 
	I1016 18:29:28.390920  254209 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token wqx7bh.ga0ezwq7c18mbgbm \
	I1016 18:29:28.391011  254209 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c \
	I1016 18:29:28.391035  254209 kubeadm.go:318] 	--control-plane 
	I1016 18:29:28.391043  254209 kubeadm.go:318] 
	I1016 18:29:28.391127  254209 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 18:29:28.391140  254209 kubeadm.go:318] 
	I1016 18:29:28.391228  254209 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token wqx7bh.ga0ezwq7c18mbgbm \
	I1016 18:29:28.391331  254209 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c 
	I1016 18:29:28.391345  254209 cni.go:84] Creating CNI manager for ""
	I1016 18:29:28.391351  254209 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:29:28.392742  254209 out.go:179] * Configuring CNI (Container Networking Interface) ...
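
The kindnet recommendation at cni.go:143 reflects minikube's CNI selection: a container-based driver with a non-Docker runtime defaults to kindnet unless the user chose a CNI explicitly. A deliberately simplified sketch of that decision; the real table in minikube's cni package covers more branches, and the fallback value here is a placeholder:

    package main

    import "fmt"

    // chooseCNI is a stripped-down version of the decision behind
    // `"docker" driver + "crio" runtime found, recommending kindnet`.
    func chooseCNI(driver, runtime, userChoice string) string {
        if userChoice != "" {
            return userChoice // an explicit --cni flag wins
        }
        if driver == "docker" && runtime != "docker" {
            return "kindnet" // non-Docker runtimes in a kic container need a bundled CNI
        }
        return "auto" // placeholder: the real logic handles the remaining branches
    }

    func main() {
        fmt.Println(chooseCNI("docker", "crio", "")) // kindnet
    }
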
	I1016 18:29:25.836785  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:25.837228  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:25.837274  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:25.837338  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:25.864224  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:25.864250  228782 cri.go:89] found id: ""
	I1016 18:29:25.864260  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:25.864307  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:25.868459  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:25.868525  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:25.894631  228782 cri.go:89] found id: ""
	I1016 18:29:25.894658  228782 logs.go:282] 0 containers: []
	W1016 18:29:25.894671  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:25.894679  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:25.894750  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:25.926152  228782 cri.go:89] found id: ""
	I1016 18:29:25.926179  228782 logs.go:282] 0 containers: []
	W1016 18:29:25.926190  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:25.926198  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:25.926251  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:25.963328  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:25.963355  228782 cri.go:89] found id: ""
	I1016 18:29:25.963365  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:25.963425  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:25.968500  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:25.968557  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:26.000655  228782 cri.go:89] found id: ""
	I1016 18:29:26.000684  228782 logs.go:282] 0 containers: []
	W1016 18:29:26.000693  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:26.000701  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:26.000796  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:26.033474  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:26.033497  228782 cri.go:89] found id: ""
	I1016 18:29:26.033505  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:26.033570  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:26.038349  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:26.038413  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:26.069780  228782 cri.go:89] found id: ""
	I1016 18:29:26.069808  228782 logs.go:282] 0 containers: []
	W1016 18:29:26.069818  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:26.069824  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:26.069882  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:26.103136  228782 cri.go:89] found id: ""
	I1016 18:29:26.103171  228782 logs.go:282] 0 containers: []
	W1016 18:29:26.103183  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:26.103201  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:26.103215  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:26.139969  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:26.139999  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:26.208221  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:26.208254  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:26.244473  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:26.244505  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:26.350643  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:26.350676  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:26.369275  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:26.369312  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:26.442326  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:26.442349  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:26.442365  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:26.483134  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:26.483169  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:29.040764  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:29.041151  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:29.041199  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:29.041257  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
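(The api_server.go lines above poll the apiserver's /healthz endpoint and log "stopped" on a refused connection before falling back to gathering logs. A minimal sketch of that probe pattern, assuming TLS verification is skipped purely for illustration — minikube's real client uses the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Probe the apiserver healthz endpoint, retrying on connection refused,
		// as the "Checking apiserver healthz" / "stopped:" log lines above do.
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for attempt := 0; attempt < 5; attempt++ {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err != nil {
				fmt.Println("stopped:", err) // e.g. dial tcp ...: connect: connection refused
				time.Sleep(3 * time.Second)
				continue
			}
			resp.Body.Close()
			fmt.Println("healthz returned", resp.StatusCode)
			return
		}
	}

A 200 from /healthz is what flips the log from the retry loop into the "control plane version" check seen in the healthy cluster's trace below.)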
	W1016 18:29:26.109979  249491 node_ready.go:57] node "embed-certs-063117" has "Ready":"False" status (will retry)
	W1016 18:29:28.608795  249491 node_ready.go:57] node "embed-certs-063117" has "Ready":"False" status (will retry)
	I1016 18:29:28.393724  254209 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 18:29:28.398316  254209 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 18:29:28.398334  254209 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 18:29:28.412460  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 18:29:28.630658  254209 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 18:29:28.630739  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:28.630750  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-523257 minikube.k8s.io/updated_at=2025_10_16T18_29_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=default-k8s-diff-port-523257 minikube.k8s.io/primary=true
	I1016 18:29:28.644240  254209 ops.go:34] apiserver oom_adj: -16
	I1016 18:29:28.721533  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:29.221909  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:29.721810  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:30.222262  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:30.721738  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:31.222945  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:31.722510  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:30.608496  249491 node_ready.go:49] node "embed-certs-063117" is "Ready"
	I1016 18:29:30.608520  249491 node_ready.go:38] duration metric: took 10.503114261s for node "embed-certs-063117" to be "Ready" ...
	I1016 18:29:30.608533  249491 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:29:30.608583  249491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:29:30.621063  249491 api_server.go:72] duration metric: took 10.849240762s to wait for apiserver process to appear ...
	I1016 18:29:30.621089  249491 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:29:30.621109  249491 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1016 18:29:30.626152  249491 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1016 18:29:30.627107  249491 api_server.go:141] control plane version: v1.34.1
	I1016 18:29:30.627128  249491 api_server.go:131] duration metric: took 6.033168ms to wait for apiserver health ...
	I1016 18:29:30.627136  249491 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:29:30.630659  249491 system_pods.go:59] 8 kube-system pods found
	I1016 18:29:30.630699  249491 system_pods.go:61] "coredns-66bc5c9577-v85b5" [023f2420-4132-430e-90ed-4e7c5533aeeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:29:30.630746  249491 system_pods.go:61] "etcd-embed-certs-063117" [fd54eaf6-ae80-44ce-a6fe-6fbeeac7ea85] Running
	I1016 18:29:30.630759  249491 system_pods.go:61] "kindnet-9qp8q" [6c45c361-9d61-45f5-9863-a1ceb556db84] Running
	I1016 18:29:30.630772  249491 system_pods.go:61] "kube-apiserver-embed-certs-063117" [a04b20d4-2663-4436-aad1-a1951df32809] Running
	I1016 18:29:30.630916  249491 system_pods.go:61] "kube-controller-manager-embed-certs-063117" [49fb248e-c033-4cc9-b1f0-51c0b60eaa1c] Running
	I1016 18:29:30.630926  249491 system_pods.go:61] "kube-proxy-rsvq2" [7cb8239f-5115-4775-aab6-f0fc7c2dc2fb] Running
	I1016 18:29:30.630937  249491 system_pods.go:61] "kube-scheduler-embed-certs-063117" [28178b78-ce0e-4ad4-b335-3180c4a3e3a3] Running
	I1016 18:29:30.630959  249491 system_pods.go:61] "storage-provisioner" [cc86ca12-3c7b-4447-97a9-b998051c6b68] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:29:30.630971  249491 system_pods.go:74] duration metric: took 3.829293ms to wait for pod list to return data ...
	I1016 18:29:30.630985  249491 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:29:30.633438  249491 default_sa.go:45] found service account: "default"
	I1016 18:29:30.633459  249491 default_sa.go:55] duration metric: took 2.463926ms for default service account to be created ...
	I1016 18:29:30.633469  249491 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 18:29:30.637230  249491 system_pods.go:86] 8 kube-system pods found
	I1016 18:29:30.637270  249491 system_pods.go:89] "coredns-66bc5c9577-v85b5" [023f2420-4132-430e-90ed-4e7c5533aeeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:29:30.637278  249491 system_pods.go:89] "etcd-embed-certs-063117" [fd54eaf6-ae80-44ce-a6fe-6fbeeac7ea85] Running
	I1016 18:29:30.637286  249491 system_pods.go:89] "kindnet-9qp8q" [6c45c361-9d61-45f5-9863-a1ceb556db84] Running
	I1016 18:29:30.637292  249491 system_pods.go:89] "kube-apiserver-embed-certs-063117" [a04b20d4-2663-4436-aad1-a1951df32809] Running
	I1016 18:29:30.637299  249491 system_pods.go:89] "kube-controller-manager-embed-certs-063117" [49fb248e-c033-4cc9-b1f0-51c0b60eaa1c] Running
	I1016 18:29:30.637308  249491 system_pods.go:89] "kube-proxy-rsvq2" [7cb8239f-5115-4775-aab6-f0fc7c2dc2fb] Running
	I1016 18:29:30.637313  249491 system_pods.go:89] "kube-scheduler-embed-certs-063117" [28178b78-ce0e-4ad4-b335-3180c4a3e3a3] Running
	I1016 18:29:30.637321  249491 system_pods.go:89] "storage-provisioner" [cc86ca12-3c7b-4447-97a9-b998051c6b68] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:29:30.637342  249491 retry.go:31] will retry after 264.141768ms: missing components: kube-dns
	I1016 18:29:30.905515  249491 system_pods.go:86] 8 kube-system pods found
	I1016 18:29:30.905557  249491 system_pods.go:89] "coredns-66bc5c9577-v85b5" [023f2420-4132-430e-90ed-4e7c5533aeeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:29:30.905566  249491 system_pods.go:89] "etcd-embed-certs-063117" [fd54eaf6-ae80-44ce-a6fe-6fbeeac7ea85] Running
	I1016 18:29:30.905573  249491 system_pods.go:89] "kindnet-9qp8q" [6c45c361-9d61-45f5-9863-a1ceb556db84] Running
	I1016 18:29:30.905578  249491 system_pods.go:89] "kube-apiserver-embed-certs-063117" [a04b20d4-2663-4436-aad1-a1951df32809] Running
	I1016 18:29:30.905583  249491 system_pods.go:89] "kube-controller-manager-embed-certs-063117" [49fb248e-c033-4cc9-b1f0-51c0b60eaa1c] Running
	I1016 18:29:30.905586  249491 system_pods.go:89] "kube-proxy-rsvq2" [7cb8239f-5115-4775-aab6-f0fc7c2dc2fb] Running
	I1016 18:29:30.905591  249491 system_pods.go:89] "kube-scheduler-embed-certs-063117" [28178b78-ce0e-4ad4-b335-3180c4a3e3a3] Running
	I1016 18:29:30.905599  249491 system_pods.go:89] "storage-provisioner" [cc86ca12-3c7b-4447-97a9-b998051c6b68] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:29:30.905621  249491 retry.go:31] will retry after 272.815126ms: missing components: kube-dns
	I1016 18:29:31.182959  249491 system_pods.go:86] 8 kube-system pods found
	I1016 18:29:31.182996  249491 system_pods.go:89] "coredns-66bc5c9577-v85b5" [023f2420-4132-430e-90ed-4e7c5533aeeb] Running
	I1016 18:29:31.183004  249491 system_pods.go:89] "etcd-embed-certs-063117" [fd54eaf6-ae80-44ce-a6fe-6fbeeac7ea85] Running
	I1016 18:29:31.183010  249491 system_pods.go:89] "kindnet-9qp8q" [6c45c361-9d61-45f5-9863-a1ceb556db84] Running
	I1016 18:29:31.183016  249491 system_pods.go:89] "kube-apiserver-embed-certs-063117" [a04b20d4-2663-4436-aad1-a1951df32809] Running
	I1016 18:29:31.183023  249491 system_pods.go:89] "kube-controller-manager-embed-certs-063117" [49fb248e-c033-4cc9-b1f0-51c0b60eaa1c] Running
	I1016 18:29:31.183028  249491 system_pods.go:89] "kube-proxy-rsvq2" [7cb8239f-5115-4775-aab6-f0fc7c2dc2fb] Running
	I1016 18:29:31.183034  249491 system_pods.go:89] "kube-scheduler-embed-certs-063117" [28178b78-ce0e-4ad4-b335-3180c4a3e3a3] Running
	I1016 18:29:31.183038  249491 system_pods.go:89] "storage-provisioner" [cc86ca12-3c7b-4447-97a9-b998051c6b68] Running
	I1016 18:29:31.183048  249491 system_pods.go:126] duration metric: took 549.572251ms to wait for k8s-apps to be running ...
	I1016 18:29:31.183057  249491 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 18:29:31.183107  249491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:29:31.196951  249491 system_svc.go:56] duration metric: took 13.886426ms WaitForService to wait for kubelet
	I1016 18:29:31.196976  249491 kubeadm.go:586] duration metric: took 11.42515893s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:29:31.196996  249491 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:29:31.200148  249491 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1016 18:29:31.200174  249491 node_conditions.go:123] node cpu capacity is 8
	I1016 18:29:31.200186  249491 node_conditions.go:105] duration metric: took 3.185275ms to run NodePressure ...
	I1016 18:29:31.200197  249491 start.go:241] waiting for startup goroutines ...
	I1016 18:29:31.200203  249491 start.go:246] waiting for cluster config update ...
	I1016 18:29:31.200216  249491 start.go:255] writing updated cluster config ...
	I1016 18:29:31.200464  249491 ssh_runner.go:195] Run: rm -f paused
	I1016 18:29:31.204547  249491 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:29:31.208677  249491 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v85b5" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.212900  249491 pod_ready.go:94] pod "coredns-66bc5c9577-v85b5" is "Ready"
	I1016 18:29:31.212920  249491 pod_ready.go:86] duration metric: took 4.216559ms for pod "coredns-66bc5c9577-v85b5" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.214804  249491 pod_ready.go:83] waiting for pod "etcd-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.218157  249491 pod_ready.go:94] pod "etcd-embed-certs-063117" is "Ready"
	I1016 18:29:31.218176  249491 pod_ready.go:86] duration metric: took 3.355374ms for pod "etcd-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.219965  249491 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.224645  249491 pod_ready.go:94] pod "kube-apiserver-embed-certs-063117" is "Ready"
	I1016 18:29:31.224665  249491 pod_ready.go:86] duration metric: took 4.684934ms for pod "kube-apiserver-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.226498  249491 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.608777  249491 pod_ready.go:94] pod "kube-controller-manager-embed-certs-063117" is "Ready"
	I1016 18:29:31.608802  249491 pod_ready.go:86] duration metric: took 382.283573ms for pod "kube-controller-manager-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.809171  249491 pod_ready.go:83] waiting for pod "kube-proxy-rsvq2" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:32.209404  249491 pod_ready.go:94] pod "kube-proxy-rsvq2" is "Ready"
	I1016 18:29:32.209429  249491 pod_ready.go:86] duration metric: took 400.235447ms for pod "kube-proxy-rsvq2" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:32.410356  249491 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:32.809170  249491 pod_ready.go:94] pod "kube-scheduler-embed-certs-063117" is "Ready"
	I1016 18:29:32.809199  249491 pod_ready.go:86] duration metric: took 398.804528ms for pod "kube-scheduler-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:32.809212  249491 pod_ready.go:40] duration metric: took 1.604631583s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:29:32.863208  249491 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1016 18:29:32.865029  249491 out.go:179] * Done! kubectl is now configured to use "embed-certs-063117" cluster and "default" namespace by default
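(The pod_ready.go waits above check each control-plane pod's Ready condition until it is true or the 4m0s budget runs out. A hypothetical client-go sketch of that check — not minikube's actual implementation; the pod name and kubeconfig path are copied from the log for illustration:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod carries a Ready=True condition.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s extra wait above
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"coredns-66bc5c9577-v85b5", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println(`pod "coredns-66bc5c9577-v85b5" is "Ready"`)
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod readiness")
	}

)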
	I1016 18:29:32.222199  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:32.721921  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:33.221579  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:33.296163  254209 kubeadm.go:1113] duration metric: took 4.665491695s to wait for elevateKubeSystemPrivileges
	I1016 18:29:33.296194  254209 kubeadm.go:402] duration metric: took 16.35471992s to StartCluster
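(The repeated `kubectl get sa default` runs above, spaced roughly 500ms apart, are a wait for the default service account to exist before the minikube-rbac clusterrolebinding can take effect; elevateKubeSystemPrivileges finishes once the command succeeds. A minimal sketch of that polling loop, assuming the same pinned kubectl path from the log:

	package main

	import (
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Succeeds only once the "default" service account has been created.
			err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
				"get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
			if err == nil {
				return // default service account exists; privileges can be granted
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

)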
	I1016 18:29:33.296214  254209 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:33.296275  254209 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:29:33.298961  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:33.299346  254209 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:29:33.299369  254209 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 18:29:33.299475  254209 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:29:33.299572  254209 config.go:182] Loaded profile config "default-k8s-diff-port-523257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:29:33.299578  254209 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-523257"
	I1016 18:29:33.299595  254209 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-523257"
	I1016 18:29:33.299620  254209 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-523257"
	I1016 18:29:33.299628  254209 host.go:66] Checking if "default-k8s-diff-port-523257" exists ...
	I1016 18:29:33.299636  254209 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-523257"
	I1016 18:29:33.300012  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:29:33.300177  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:29:33.302171  254209 out.go:179] * Verifying Kubernetes components...
	I1016 18:29:33.304470  254209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:29:33.332040  254209 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-523257"
	I1016 18:29:33.332146  254209 host.go:66] Checking if "default-k8s-diff-port-523257" exists ...
	I1016 18:29:33.332598  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:29:33.336186  254209 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:29:33.337836  254209 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:29:33.337921  254209 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:29:33.338014  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:33.370804  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:33.371205  254209 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:29:33.371228  254209 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:29:33.371286  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:33.396649  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:33.405998  254209 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 18:29:33.480661  254209 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:29:33.493270  254209 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:29:33.508563  254209 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:29:33.588784  254209 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1016 18:29:33.590519  254209 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-523257" to be "Ready" ...
	I1016 18:29:33.809245  254209 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
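(The sed pipeline a few lines up rewrites the coredns ConfigMap in place to inject the host.minikube.internal record and enable query logging. Assuming a stock Corefile, the patched section would look roughly like this — shape inferred from the sed expressions; directives elided with ... are untouched:

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.85.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}

)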
	I1016 18:29:29.070288  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:29.070318  228782 cri.go:89] found id: ""
	I1016 18:29:29.070328  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:29.070383  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:29.074419  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:29.074490  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:29.101845  228782 cri.go:89] found id: ""
	I1016 18:29:29.101875  228782 logs.go:282] 0 containers: []
	W1016 18:29:29.101886  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:29.101894  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:29.101945  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:29.130198  228782 cri.go:89] found id: ""
	I1016 18:29:29.130243  228782 logs.go:282] 0 containers: []
	W1016 18:29:29.130255  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:29.130267  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:29.130324  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:29.171097  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:29.171116  228782 cri.go:89] found id: ""
	I1016 18:29:29.171123  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:29.171166  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:29.175059  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:29.175114  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:29.204192  228782 cri.go:89] found id: ""
	I1016 18:29:29.204217  228782 logs.go:282] 0 containers: []
	W1016 18:29:29.204224  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:29.204229  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:29.204278  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:29.231647  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:29.231672  228782 cri.go:89] found id: ""
	I1016 18:29:29.231681  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:29.231757  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:29.236497  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:29.236557  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:29.266328  228782 cri.go:89] found id: ""
	I1016 18:29:29.266354  228782 logs.go:282] 0 containers: []
	W1016 18:29:29.266365  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:29.266372  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:29.266431  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:29.296904  228782 cri.go:89] found id: ""
	I1016 18:29:29.296926  228782 logs.go:282] 0 containers: []
	W1016 18:29:29.296936  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:29.296946  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:29.296957  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:29.389410  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:29.389443  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:29.404894  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:29.404925  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:29.463298  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:29.463323  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:29.463342  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:29.497484  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:29.497513  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:29.548374  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:29.548408  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:29.574914  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:29.574946  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:29.630476  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:29.630506  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:32.164804  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:32.165219  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:32.165273  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:32.165322  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:32.192921  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:32.192940  228782 cri.go:89] found id: ""
	I1016 18:29:32.192947  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:32.193009  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:32.197494  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:32.197566  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:32.226679  228782 cri.go:89] found id: ""
	I1016 18:29:32.226706  228782 logs.go:282] 0 containers: []
	W1016 18:29:32.226732  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:32.226740  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:32.226802  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:32.256127  228782 cri.go:89] found id: ""
	I1016 18:29:32.256152  228782 logs.go:282] 0 containers: []
	W1016 18:29:32.256162  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:32.256170  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:32.256231  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:32.286329  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:32.286351  228782 cri.go:89] found id: ""
	I1016 18:29:32.286361  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:32.286418  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:32.290615  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:32.290687  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:32.318965  228782 cri.go:89] found id: ""
	I1016 18:29:32.318989  228782 logs.go:282] 0 containers: []
	W1016 18:29:32.318999  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:32.319007  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:32.319086  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:32.349977  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:32.350001  228782 cri.go:89] found id: ""
	I1016 18:29:32.350011  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:32.350084  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:32.354512  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:32.354578  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:32.381776  228782 cri.go:89] found id: ""
	I1016 18:29:32.381805  228782 logs.go:282] 0 containers: []
	W1016 18:29:32.381814  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:32.381822  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:32.381884  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:32.413298  228782 cri.go:89] found id: ""
	I1016 18:29:32.413324  228782 logs.go:282] 0 containers: []
	W1016 18:29:32.413335  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:32.413347  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:32.413360  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:32.472097  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:32.472114  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:32.472127  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:32.505633  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:32.505661  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:32.555025  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:32.555072  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:32.585744  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:32.585777  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:32.644161  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:32.644194  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:32.676157  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:32.676182  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:32.772828  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:32.772860  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:33.810778  254209 addons.go:514] duration metric: took 511.307538ms for enable addons: enabled=[storage-provisioner default-storageclass]
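(Addon enablement above is two steps per manifest: scp the yaml onto the node, then apply it with the cluster's pinned kubectl under the node-local kubeconfig. A hypothetical sketch of the apply step — paths copied from the log; this would only succeed on the minikube node itself:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Apply a staged addon manifest with the node-side kubectl binary,
		// mirroring the "kubectl apply -f /etc/kubernetes/addons/..." runs above.
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
			"-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}

)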
	I1016 18:29:34.093650  254209 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-523257" context rescaled to 1 replicas
	W1016 18:29:35.593703  254209 node_ready.go:57] node "default-k8s-diff-port-523257" has "Ready":"False" status (will retry)
	I1016 18:29:35.291809  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:35.292347  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:35.292397  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:35.292449  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:35.320203  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:35.320224  228782 cri.go:89] found id: ""
	I1016 18:29:35.320231  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:35.320276  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:35.324296  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:35.324356  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:35.351958  228782 cri.go:89] found id: ""
	I1016 18:29:35.351982  228782 logs.go:282] 0 containers: []
	W1016 18:29:35.351990  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:35.352012  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:35.352071  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:35.382337  228782 cri.go:89] found id: ""
	I1016 18:29:35.382364  228782 logs.go:282] 0 containers: []
	W1016 18:29:35.382375  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:35.382382  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:35.382436  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:35.409388  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:35.409406  228782 cri.go:89] found id: ""
	I1016 18:29:35.409413  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:35.409455  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:35.413485  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:35.413543  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:35.440778  228782 cri.go:89] found id: ""
	I1016 18:29:35.440804  228782 logs.go:282] 0 containers: []
	W1016 18:29:35.440812  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:35.440820  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:35.440896  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:35.466161  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:35.466184  228782 cri.go:89] found id: ""
	I1016 18:29:35.466193  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:35.466246  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:35.470498  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:35.470557  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:35.498773  228782 cri.go:89] found id: ""
	I1016 18:29:35.498794  228782 logs.go:282] 0 containers: []
	W1016 18:29:35.498800  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:35.498805  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:35.498850  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:35.525923  228782 cri.go:89] found id: ""
	I1016 18:29:35.525947  228782 logs.go:282] 0 containers: []
	W1016 18:29:35.525956  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:35.525982  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:35.526000  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:35.559484  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:35.559519  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:35.615011  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:35.615051  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:35.642652  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:35.642687  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:35.704004  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:35.704038  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:35.736269  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:35.736298  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:35.825956  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:35.825994  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:35.841899  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:35.841935  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:35.898506  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:38.400113  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:38.400540  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:38.400594  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:38.400649  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:38.427645  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:38.427665  228782 cri.go:89] found id: ""
	I1016 18:29:38.427674  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:38.427732  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:38.431841  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:38.431910  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:38.459141  228782 cri.go:89] found id: ""
	I1016 18:29:38.459165  228782 logs.go:282] 0 containers: []
	W1016 18:29:38.459175  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:38.459182  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:38.459238  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:38.486994  228782 cri.go:89] found id: ""
	I1016 18:29:38.487021  228782 logs.go:282] 0 containers: []
	W1016 18:29:38.487032  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:38.487039  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:38.487100  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:38.514487  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:38.514508  228782 cri.go:89] found id: ""
	I1016 18:29:38.514515  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:38.514564  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:38.518661  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:38.518736  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:38.546066  228782 cri.go:89] found id: ""
	I1016 18:29:38.546087  228782 logs.go:282] 0 containers: []
	W1016 18:29:38.546095  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:38.546100  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:38.546154  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:38.574022  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:38.574039  228782 cri.go:89] found id: ""
	I1016 18:29:38.574045  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:38.574087  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:38.578237  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:38.578307  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:38.607676  228782 cri.go:89] found id: ""
	I1016 18:29:38.607699  228782 logs.go:282] 0 containers: []
	W1016 18:29:38.607706  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:38.607736  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:38.607796  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:38.635578  228782 cri.go:89] found id: ""
	I1016 18:29:38.635604  228782 logs.go:282] 0 containers: []
	W1016 18:29:38.635615  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:38.635625  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:38.635640  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:38.694675  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:38.694699  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:38.694738  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:38.728850  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:38.728879  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:38.780750  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:38.780780  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:38.809679  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:38.809705  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:38.863006  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:38.863035  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:38.894630  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:38.894657  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:38.990653  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:38.990687  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
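(Each log-gathering cycle above starts by enumerating containers per component with `crictl ps -a --quiet --name=<component>`; an empty result produces the `No container was found matching` warning, while any IDs found are fed to `crictl logs --tail 400`. A minimal sketch of that enumeration step, assuming crictl is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers runs crictl with a name filter and returns the matching
	// container IDs, one per output line; an empty slice means none were found.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := listContainers("etcd")
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		if len(ids) == 0 {
			fmt.Println(`No container was found matching "etcd"`)
			return
		}
		fmt.Println("found:", ids)
	}

)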
	W1016 18:29:38.093918  254209 node_ready.go:57] node "default-k8s-diff-port-523257" has "Ready":"False" status (will retry)
	W1016 18:29:40.094527  254209 node_ready.go:57] node "default-k8s-diff-port-523257" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 16 18:28:58 no-preload-808539 crio[568]: time="2025-10-16T18:28:58.778881576Z" level=info msg="Started container" PID=1732 containerID=a8ff625b8a2049faf833b18bdf18fbcd11c8d071abb2f0cef12f35ddbb8f896e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f/dashboard-metrics-scraper id=3a458974-9446-4c05-86b7-2995171829b5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=be5ee5b2aa8b77ec951b387864d228a26a00ac68df1f4e30fb7783dc23e86aac
	Oct 16 18:28:59 no-preload-808539 crio[568]: time="2025-10-16T18:28:59.72863572Z" level=info msg="Removing container: dc039ad879b28002d2a75b23e31ba73171d04a6d336d24f256364e198f6302b6" id=972e6b6d-ad53-457e-bb0d-76cf2824fa46 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:28:59 no-preload-808539 crio[568]: time="2025-10-16T18:28:59.738853283Z" level=info msg="Removed container dc039ad879b28002d2a75b23e31ba73171d04a6d336d24f256364e198f6302b6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f/dashboard-metrics-scraper" id=972e6b6d-ad53-457e-bb0d-76cf2824fa46 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.655079539Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=578afea2-e0a0-45c2-a1ed-07083230f2cc name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.65607512Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1974088d-44b4-4513-aa31-6776c3a704b9 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.657255529Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f/dashboard-metrics-scraper" id=4f746d7b-6374-46c3-8bb4-1ebb853b4ccc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.657536256Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.663424106Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.663958934Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.70202868Z" level=info msg="Created container 08876948c4f7dfb4079f76cc0a99927216b6d250c7e21b297512890297bcaa9d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f/dashboard-metrics-scraper" id=4f746d7b-6374-46c3-8bb4-1ebb853b4ccc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.702749775Z" level=info msg="Starting container: 08876948c4f7dfb4079f76cc0a99927216b6d250c7e21b297512890297bcaa9d" id=34430631-24db-482c-92b6-2f80fb2a0d7b name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.705061062Z" level=info msg="Started container" PID=1742 containerID=08876948c4f7dfb4079f76cc0a99927216b6d250c7e21b297512890297bcaa9d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f/dashboard-metrics-scraper id=34430631-24db-482c-92b6-2f80fb2a0d7b name=/runtime.v1.RuntimeService/StartContainer sandboxID=be5ee5b2aa8b77ec951b387864d228a26a00ac68df1f4e30fb7783dc23e86aac
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.782226794Z" level=info msg="Removing container: a8ff625b8a2049faf833b18bdf18fbcd11c8d071abb2f0cef12f35ddbb8f896e" id=8dfb8ac2-b9df-4c8f-8601-517a15db2fc6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:29:17 no-preload-808539 crio[568]: time="2025-10-16T18:29:17.792843833Z" level=info msg="Removed container a8ff625b8a2049faf833b18bdf18fbcd11c8d071abb2f0cef12f35ddbb8f896e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f/dashboard-metrics-scraper" id=8dfb8ac2-b9df-4c8f-8601-517a15db2fc6 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.78612746Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=cefa2179-6537-4db6-b33c-1d852d2ed518 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.787364524Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a296e5e2-ca88-450b-b241-573d039f3eac name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.788350833Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=30e21518-c032-4a57-9580-baf87b6c84cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.788626755Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.794648315Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.794881091Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/731c49cee117e97f19172c2bb3c09e6d98e754a58a737ffd8257bb7e87531534/merged/etc/passwd: no such file or directory"
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.794916274Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/731c49cee117e97f19172c2bb3c09e6d98e754a58a737ffd8257bb7e87531534/merged/etc/group: no such file or directory"
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.795233135Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.833972796Z" level=info msg="Created container ebf3196883c18d487165f285301c9acb4041875447091801dea9902d984ed8e9: kube-system/storage-provisioner/storage-provisioner" id=30e21518-c032-4a57-9580-baf87b6c84cd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.834661728Z" level=info msg="Starting container: ebf3196883c18d487165f285301c9acb4041875447091801dea9902d984ed8e9" id=cc304b0d-aa66-4de1-a3bc-4ca27f2c6683 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:29:18 no-preload-808539 crio[568]: time="2025-10-16T18:29:18.837669216Z" level=info msg="Started container" PID=1756 containerID=ebf3196883c18d487165f285301c9acb4041875447091801dea9902d984ed8e9 description=kube-system/storage-provisioner/storage-provisioner id=cc304b0d-aa66-4de1-a3bc-4ca27f2c6683 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e8db1dcd37a5e0bbf47c4d06d4bcb590260578eb9807867d33379d876807507
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	ebf3196883c18       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   3e8db1dcd37a5       storage-provisioner                          kube-system
	08876948c4f7d       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   be5ee5b2aa8b7       dashboard-metrics-scraper-6ffb444bf9-xpk9f   kubernetes-dashboard
	91a77615ada58       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   a19a833e89ed2       kubernetes-dashboard-855c9754f9-j8f8d        kubernetes-dashboard
	3de7cf0205d7d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   b7041baf44561       coredns-66bc5c9577-ntqqg                     kube-system
	151b9fc5c5caa       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   a12056f23d26b       busybox                                      default
	a093902546acd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   3e8db1dcd37a5       storage-provisioner                          kube-system
	9af550e59feff       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   2940a42013797       kindnet-kxznd                                kube-system
	c0468f3a79d7d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   c7f7d7001858c       kube-proxy-68kl9                             kube-system
	916c3b6d66243       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   15a9927fee995       kube-scheduler-no-preload-808539             kube-system
	4f293fe8269d1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   1abb064121f2a       kube-apiserver-no-preload-808539             kube-system
	7181b04bfb82e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   4289d00c455b9       kube-controller-manager-no-preload-808539    kube-system
	36d3ec65570d3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   75e9fe117e64f       etcd-no-preload-808539                       kube-system
	
	
	==> coredns [3de7cf0205d7d6eeac5cc2e822d62c8b8946ba8f92cbf91e763dd4318fd7e3c7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59304 - 21226 "HINFO IN 744875417056776112.4312268298637680400. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.034905363s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
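The three reflector failures above are CoreDNS's kubernetes plugin timing out against the in-cluster service VIP (10.96.0.1:443) before the pod network was programmed; once kindnet synced its rules (see the kindnet log below) the lists succeed and the errors stop. A minimal client-go sketch of the same List call, assuming in-cluster config; this is an illustrative probe, not CoreDNS's actual code:

	// probe.go: a hedged, standalone reproduction of the reflector's List call
	// against the service VIP. Illustrative only.
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // resolves to 10.96.0.1:443 inside a pod
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same shape as the failing call above: list Services with Limit=500.
		svcs, err := client.CoreV1().Services(metav1.NamespaceAll).List(ctx, metav1.ListOptions{Limit: 500})
		if err != nil {
			fmt.Println("list failed (cf. the i/o timeout above):", err)
			return
		}
		fmt.Println("services visible:", len(svcs.Items))
	}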
	
	
	==> describe nodes <==
	Name:               no-preload-808539
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-808539
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=no-preload-808539
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_27_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:27:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-808539
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:29:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:29:18 +0000   Thu, 16 Oct 2025 18:27:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:29:18 +0000   Thu, 16 Oct 2025 18:27:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:29:18 +0000   Thu, 16 Oct 2025 18:27:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 18:29:18 +0000   Thu, 16 Oct 2025 18:28:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-808539
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                738a1706-7fde-4f71-a519-e3178e828487
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-ntqqg                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-no-preload-808539                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-kxznd                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-808539              250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-808539     200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-68kl9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-808539              100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-xpk9f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-j8f8d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node no-preload-808539 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node no-preload-808539 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s               kubelet          Node no-preload-808539 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node no-preload-808539 event: Registered Node no-preload-808539 in Controller
	  Normal  NodeReady                97s                kubelet          Node no-preload-808539 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node no-preload-808539 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node no-preload-808539 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node no-preload-808539 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node no-preload-808539 event: Registered Node no-preload-808539 in Controller
	
	
	==> dmesg <==
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.008703] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.022984] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023933] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +2.047848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +4.031714] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +8.063410] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[ +16.382910] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[Oct16 17:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	
	
	==> etcd [36d3ec65570d3105d713c2d5a8f592c5757f5b797e08265d5e50fa232714f4ec] <==
	{"level":"warn","ts":"2025-10-16T18:28:46.657915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.673927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.680314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.686568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.692984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.700262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.706569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.715261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.723652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.736962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.743691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.751379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.758518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.766923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.776230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.791971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.803105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.810590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.818104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.824621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.838968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.842791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.850478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.857981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:28:46.911295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47276","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:29:44 up  1:12,  0 user,  load average: 3.51, 2.70, 1.75
	Linux no-preload-808539 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9af550e59feffaa80d88161ffa36ffd9b00a7f1c63f27efce7435d4fb3f0f71a] <==
	I1016 18:28:48.260656       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 18:28:48.261092       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1016 18:28:48.261266       1 main.go:148] setting mtu 1500 for CNI 
	I1016 18:28:48.261286       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 18:28:48.261309       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T18:28:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 18:28:48.478382       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 18:28:48.478412       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 18:28:48.478433       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 18:28:48.479391       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 18:28:48.779230       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 18:28:48.779280       1 metrics.go:72] Registering metrics
	I1016 18:28:48.779387       1 controller.go:711] "Syncing nftables rules"
	I1016 18:28:58.477801       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1016 18:28:58.477870       1 main.go:301] handling current node
	I1016 18:29:08.478774       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1016 18:29:08.478806       1 main.go:301] handling current node
	I1016 18:29:18.478063       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1016 18:29:18.478126       1 main.go:301] handling current node
	I1016 18:29:28.478159       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1016 18:29:28.478196       1 main.go:301] handling current node
	I1016 18:29:38.486838       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1016 18:29:38.486876       1 main.go:301] handling current node
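The kindnet log settles into a steady ten-second cadence, one "Handling node" pair per tick for this single-node cluster. A rough sketch of that ticker-driven reconcile loop; the node map and log format are copied from the output above for illustration, and this is not kindnet's actual implementation:

	// reconcile.go: illustrative 10-second reconcile loop matching the cadence above.
	package main

	import (
		"context"
		"log"
		"time"
	)

	// reconcile re-applies routes/rules for every known node's pod CIDR.
	func reconcile(nodeIPs map[string]struct{}) {
		for ip := range nodeIPs {
			log.Printf("Handling node with IPs: map[%s:{}]", ip)
		}
	}

	func main() {
		ctx := context.Background()
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				reconcile(map[string]struct{}{"192.168.94.2": {}})
			}
		}
	}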
	
	
	==> kube-apiserver [4f293fe8269d1d295e9d15b52d72bb19e3d1f3c9099a4102dec127e207a05b13] <==
	I1016 18:28:47.392867       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1016 18:28:47.392920       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1016 18:28:47.392966       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 18:28:47.393207       1 aggregator.go:171] initial CRD sync complete...
	I1016 18:28:47.393215       1 autoregister_controller.go:144] Starting autoregister controller
	I1016 18:28:47.393220       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 18:28:47.393225       1 cache.go:39] Caches are synced for autoregister controller
	I1016 18:28:47.398037       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1016 18:28:47.399101       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1016 18:28:47.407115       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1016 18:28:47.417490       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1016 18:28:47.417523       1 policy_source.go:240] refreshing policies
	I1016 18:28:47.428503       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:28:47.630646       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 18:28:47.647474       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 18:28:47.670510       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 18:28:47.699191       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 18:28:47.706651       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 18:28:47.750824       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.153.115"}
	I1016 18:28:47.764443       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.5.198"}
	I1016 18:28:48.295818       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 18:28:50.794952       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 18:28:50.842880       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 18:28:51.291138       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 18:28:51.291147       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [7181b04bfb82e037325297ecffa17ead24bea639b33b265693a70609af2e891c] <==
	I1016 18:28:50.738050       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1016 18:28:50.738158       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1016 18:28:50.738258       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1016 18:28:50.738305       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1016 18:28:50.738316       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1016 18:28:50.738325       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1016 18:28:50.738309       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 18:28:50.738362       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-808539"
	I1016 18:28:50.738420       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1016 18:28:50.739086       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1016 18:28:50.741633       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 18:28:50.742422       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:28:50.743997       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 18:28:50.745243       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1016 18:28:50.745288       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1016 18:28:50.745321       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1016 18:28:50.745249       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:28:50.745328       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1016 18:28:50.745385       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1016 18:28:50.745483       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 18:28:50.748650       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1016 18:28:50.750323       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 18:28:50.755309       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1016 18:28:50.756568       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:28:50.759697       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	
	
	==> kube-proxy [c0468f3a79d7d838f56df1eb32a946b34b2c3ab791c04e2980dbd98bdf6559e9] <==
	I1016 18:28:48.065410       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:28:48.130752       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:28:48.231551       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:28:48.231590       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1016 18:28:48.231705       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:28:48.251543       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:28:48.251609       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:28:48.257243       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:28:48.257776       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:28:48.257813       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:28:48.261423       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:28:48.261436       1 config.go:200] "Starting service config controller"
	I1016 18:28:48.261446       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:28:48.261449       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:28:48.261470       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:28:48.261488       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:28:48.261535       1 config.go:309] "Starting node config controller"
	I1016 18:28:48.261544       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:28:48.361605       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 18:28:48.361607       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:28:48.361653       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1016 18:28:48.361738       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [916c3b6d662439d89a451d927be5cafe6a0fca42419d42bd59af6042bb15ceea] <==
	I1016 18:28:47.307972       1 serving.go:386] Generated self-signed cert in-memory
	I1016 18:28:48.119263       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 18:28:48.119288       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:28:48.124053       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1016 18:28:48.124284       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1016 18:28:48.124393       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 18:28:48.124446       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 18:28:48.124448       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:28:48.124931       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:28:48.125890       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 18:28:48.125929       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 18:28:48.224639       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1016 18:28:48.224639       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 18:28:48.225733       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 18:28:51 no-preload-808539 kubelet[711]: I1016 18:28:51.463553     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/98611c84-133a-4ab8-992f-3f5889238b0e-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-j8f8d\" (UID: \"98611c84-133a-4ab8-992f-3f5889238b0e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j8f8d"
	Oct 16 18:28:51 no-preload-808539 kubelet[711]: I1016 18:28:51.463626     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8cbh\" (UniqueName: \"kubernetes.io/projected/1ecf1060-060b-41cd-a215-9ddf9b9e68d5-kube-api-access-z8cbh\") pod \"dashboard-metrics-scraper-6ffb444bf9-xpk9f\" (UID: \"1ecf1060-060b-41cd-a215-9ddf9b9e68d5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f"
	Oct 16 18:28:51 no-preload-808539 kubelet[711]: I1016 18:28:51.463653     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjxmb\" (UniqueName: \"kubernetes.io/projected/98611c84-133a-4ab8-992f-3f5889238b0e-kube-api-access-cjxmb\") pod \"kubernetes-dashboard-855c9754f9-j8f8d\" (UID: \"98611c84-133a-4ab8-992f-3f5889238b0e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j8f8d"
	Oct 16 18:28:51 no-preload-808539 kubelet[711]: I1016 18:28:51.463743     711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1ecf1060-060b-41cd-a215-9ddf9b9e68d5-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-xpk9f\" (UID: \"1ecf1060-060b-41cd-a215-9ddf9b9e68d5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f"
	Oct 16 18:28:55 no-preload-808539 kubelet[711]: I1016 18:28:55.915945     711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 16 18:28:56 no-preload-808539 kubelet[711]: I1016 18:28:56.751550     711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j8f8d" podStartSLOduration=1.6293062740000002 podStartE2EDuration="5.751527639s" podCreationTimestamp="2025-10-16 18:28:51 +0000 UTC" firstStartedPulling="2025-10-16 18:28:51.702018847 +0000 UTC m=+7.152122961" lastFinishedPulling="2025-10-16 18:28:55.824240215 +0000 UTC m=+11.274344326" observedRunningTime="2025-10-16 18:28:56.750884026 +0000 UTC m=+12.200988154" watchObservedRunningTime="2025-10-16 18:28:56.751527639 +0000 UTC m=+12.201631770"
	Oct 16 18:28:58 no-preload-808539 kubelet[711]: I1016 18:28:58.720091     711 scope.go:117] "RemoveContainer" containerID="dc039ad879b28002d2a75b23e31ba73171d04a6d336d24f256364e198f6302b6"
	Oct 16 18:28:59 no-preload-808539 kubelet[711]: I1016 18:28:59.727129     711 scope.go:117] "RemoveContainer" containerID="dc039ad879b28002d2a75b23e31ba73171d04a6d336d24f256364e198f6302b6"
	Oct 16 18:28:59 no-preload-808539 kubelet[711]: I1016 18:28:59.727295     711 scope.go:117] "RemoveContainer" containerID="a8ff625b8a2049faf833b18bdf18fbcd11c8d071abb2f0cef12f35ddbb8f896e"
	Oct 16 18:28:59 no-preload-808539 kubelet[711]: E1016 18:28:59.727478     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xpk9f_kubernetes-dashboard(1ecf1060-060b-41cd-a215-9ddf9b9e68d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f" podUID="1ecf1060-060b-41cd-a215-9ddf9b9e68d5"
	Oct 16 18:29:00 no-preload-808539 kubelet[711]: I1016 18:29:00.732811     711 scope.go:117] "RemoveContainer" containerID="a8ff625b8a2049faf833b18bdf18fbcd11c8d071abb2f0cef12f35ddbb8f896e"
	Oct 16 18:29:00 no-preload-808539 kubelet[711]: E1016 18:29:00.732995     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xpk9f_kubernetes-dashboard(1ecf1060-060b-41cd-a215-9ddf9b9e68d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f" podUID="1ecf1060-060b-41cd-a215-9ddf9b9e68d5"
	Oct 16 18:29:06 no-preload-808539 kubelet[711]: I1016 18:29:06.881984     711 scope.go:117] "RemoveContainer" containerID="a8ff625b8a2049faf833b18bdf18fbcd11c8d071abb2f0cef12f35ddbb8f896e"
	Oct 16 18:29:06 no-preload-808539 kubelet[711]: E1016 18:29:06.882229     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xpk9f_kubernetes-dashboard(1ecf1060-060b-41cd-a215-9ddf9b9e68d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f" podUID="1ecf1060-060b-41cd-a215-9ddf9b9e68d5"
	Oct 16 18:29:17 no-preload-808539 kubelet[711]: I1016 18:29:17.654509     711 scope.go:117] "RemoveContainer" containerID="a8ff625b8a2049faf833b18bdf18fbcd11c8d071abb2f0cef12f35ddbb8f896e"
	Oct 16 18:29:17 no-preload-808539 kubelet[711]: I1016 18:29:17.780870     711 scope.go:117] "RemoveContainer" containerID="a8ff625b8a2049faf833b18bdf18fbcd11c8d071abb2f0cef12f35ddbb8f896e"
	Oct 16 18:29:17 no-preload-808539 kubelet[711]: I1016 18:29:17.781104     711 scope.go:117] "RemoveContainer" containerID="08876948c4f7dfb4079f76cc0a99927216b6d250c7e21b297512890297bcaa9d"
	Oct 16 18:29:17 no-preload-808539 kubelet[711]: E1016 18:29:17.781325     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xpk9f_kubernetes-dashboard(1ecf1060-060b-41cd-a215-9ddf9b9e68d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f" podUID="1ecf1060-060b-41cd-a215-9ddf9b9e68d5"
	Oct 16 18:29:18 no-preload-808539 kubelet[711]: I1016 18:29:18.785516     711 scope.go:117] "RemoveContainer" containerID="a093902546acd6ce48370566d454810105657ad4e3a0b5c22c8d50931991d0f2"
	Oct 16 18:29:26 no-preload-808539 kubelet[711]: I1016 18:29:26.881876     711 scope.go:117] "RemoveContainer" containerID="08876948c4f7dfb4079f76cc0a99927216b6d250c7e21b297512890297bcaa9d"
	Oct 16 18:29:26 no-preload-808539 kubelet[711]: E1016 18:29:26.882067     711 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xpk9f_kubernetes-dashboard(1ecf1060-060b-41cd-a215-9ddf9b9e68d5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xpk9f" podUID="1ecf1060-060b-41cd-a215-9ddf9b9e68d5"
	Oct 16 18:29:39 no-preload-808539 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 18:29:39 no-preload-808539 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 18:29:39 no-preload-808539 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 16 18:29:39 no-preload-808539 systemd[1]: kubelet.service: Consumed 1.804s CPU time.
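The kubelet lines trace a textbook CrashLoopBackOff: each failed start of dashboard-metrics-scraper doubles the restart delay ("back-off 10s", then "back-off 20s"), and kubelet keeps doubling up to its cap. A small sketch of that schedule, taking the default 10s base, 2x factor, and 5m cap as assumptions rather than reading them from this cluster's config:

	// backoff.go: the doubling restart delay implied by the kubelet log above.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		base, cap := 10*time.Second, 5*time.Minute
		delay := base
		for restart := 1; restart <= 6; restart++ {
			fmt.Printf("restart %d: back-off %s\n", restart, delay)
			delay *= 2
			if delay > cap {
				delay = cap
			}
		}
	}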
	
	
	==> kubernetes-dashboard [91a77615ada5800866478c73b61ad9458c9aab68602263b4fbb76cbe49d2c275] <==
	2025/10/16 18:28:55 Using namespace: kubernetes-dashboard
	2025/10/16 18:28:55 Using in-cluster config to connect to apiserver
	2025/10/16 18:28:55 Using secret token for csrf signing
	2025/10/16 18:28:55 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/16 18:28:55 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/16 18:28:55 Successful initial request to the apiserver, version: v1.34.1
	2025/10/16 18:28:55 Generating JWE encryption key
	2025/10/16 18:28:55 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/16 18:28:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/16 18:28:56 Initializing JWE encryption key from synchronized object
	2025/10/16 18:28:56 Creating in-cluster Sidecar client
	2025/10/16 18:28:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 18:28:56 Serving insecurely on HTTP port: 9090
	2025/10/16 18:29:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 18:28:55 Starting overwatch
	
	
	==> storage-provisioner [a093902546acd6ce48370566d454810105657ad4e3a0b5c22c8d50931991d0f2] <==
	I1016 18:28:48.035294       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1016 18:29:18.038159       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ebf3196883c18d487165f285301c9acb4041875447091801dea9902d984ed8e9] <==
	I1016 18:29:18.849677       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 18:29:18.857821       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 18:29:18.857867       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1016 18:29:18.860396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:22.315463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:26.576158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:30.174874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:33.229215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:36.251450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:36.255883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 18:29:36.256022       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 18:29:36.256196       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"403a8f30-1976-4add-8440-a3609b846a31", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-808539_3190ad05-4de0-4506-866e-f0ae8f8714c4 became leader
	I1016 18:29:36.256222       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-808539_3190ad05-4de0-4506-866e-f0ae8f8714c4!
	W1016 18:29:36.258335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:36.262011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 18:29:36.356443       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-808539_3190ad05-4de0-4506-866e-f0ae8f8714c4!
	W1016 18:29:38.265018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:38.268836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:40.272189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:40.276793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:42.280775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:42.287969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:44.291539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:44.295474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
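Every v1 Endpoints deprecation warning above comes from the provisioner's leader election, which still locks on an Endpoints object (the kube-system/k8s.io-minikube-hostpath resource named in the LeaderElection event); since v1.33 the apiserver emits a warning on each such read or write. A hedged sketch of the Lease-based lock that client-go recommends instead; the identity string and empty callback bodies are illustrative, not minikube's code:

	// leaselock.go: leader election over a coordination.k8s.io Lease, which
	// avoids the deprecated v1 Endpoints traffic warned about above.
	package main

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* start the provisioner controller */ },
				OnStoppedLeading: func() { /* stop work, exit */ },
			},
		})
	}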
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-808539 -n no-preload-808539
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-808539 -n no-preload-808539: exit status 2 (329.873469ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-808539 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.93s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.37s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-063117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-063117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (255.801813ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:29:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
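The MK_ADDON_ENABLE_PAUSED exit happens before the addon is touched: as the error chain ("check paused: list paused: runc: sudo runc list -f json") shows, minikube first checks whether the cluster is paused by listing containers with runc inside the node, and on this cri-o node the runc state directory /run/runc does not exist, so the probe itself exits 1 and the enable aborts. A hedged sketch of the probe's shape; the exact invocation minikube wraps around this command may differ:

	// pausedcheck.go: reproduce the failing paused-state probe from the stderr above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// Mirrors the failure above: "open /run/runc: no such file or directory".
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("containers: %s\n", out)
	}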
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-063117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-063117 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-063117 describe deploy/metrics-server -n kube-system: exit status 1 (58.267334ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-063117 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-063117
helpers_test.go:243: (dbg) docker inspect embed-certs-063117:

-- stdout --
	[
	    {
	        "Id": "1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1",
	        "Created": "2025-10-16T18:28:54.918690306Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 250166,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:28:55.562251391Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1/hosts",
	        "LogPath": "/var/lib/docker/containers/1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1/1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1-json.log",
	        "Name": "/embed-certs-063117",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-063117:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-063117",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1",
	                "LowerDir": "/var/lib/docker/overlay2/6b98c07b3e2c8bbba9f118db15e4186266a8da19f0536e0a0088d84b01fc366f-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6b98c07b3e2c8bbba9f118db15e4186266a8da19f0536e0a0088d84b01fc366f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6b98c07b3e2c8bbba9f118db15e4186266a8da19f0536e0a0088d84b01fc366f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6b98c07b3e2c8bbba9f118db15e4186266a8da19f0536e0a0088d84b01fc366f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-063117",
	                "Source": "/var/lib/docker/volumes/embed-certs-063117/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-063117",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-063117",
	                "name.minikube.sigs.k8s.io": "embed-certs-063117",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "94c9180a69b12cca51a7e69324cd8a9ca1ab5770568e7b75f47ef430c8eac16b",
	            "SandboxKey": "/var/run/docker/netns/94c9180a69b1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-063117": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:11:ce:1c:c8:54",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d58ff291817e0d805fb2a74d398badc9c07572e1fefc22609c9ab31d677b2e36",
	                    "EndpointID": "2784cc1080c439737478903d081cb96dc1585346d5f8b594f6d9307b3b5a65ab",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-063117",
	                        "1fe6653a430a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
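The inspect record above shows why the `HostPort` fields under `HostConfig.PortBindings` are empty: minikube publishes each node port as `127.0.0.1::PORT`, letting Docker assign ephemeral host ports, and the actual assignments only materialize under `NetworkSettings.Ports` (33073-33077 in this run). When triaging a report like this against a live container, either of the following recovers a single mapping (a sketch; the container name is the profile name from this run):

    docker port embed-certs-063117 8443/tcp
    # -> 127.0.0.1:33076
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-063117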
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-063117 -n embed-certs-063117
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-063117 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-063117 logs -n 25: (1.157510903s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p missing-upgrade-294813 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-294813       │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:27 UTC │
	│ stop    │ -p kubernetes-upgrade-750025                                                                                                                                                                                                                  │ kubernetes-upgrade-750025    │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │ 16 Oct 25 18:26 UTC │
	│ start   │ -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-750025    │ jenkins │ v1.37.0 │ 16 Oct 25 18:26 UTC │                     │
	│ delete  │ -p missing-upgrade-294813                                                                                                                                                                                                                     │ missing-upgrade-294813       │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:27 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-956814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │                     │
	│ start   │ -p no-preload-808539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:28 UTC │
	│ stop    │ -p old-k8s-version-956814 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:27 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-956814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:27 UTC │
	│ start   │ -p old-k8s-version-956814 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:28 UTC │
	│ addons  │ enable metrics-server -p no-preload-808539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │                     │
	│ stop    │ -p no-preload-808539 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ addons  │ enable dashboard -p no-preload-808539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ start   │ -p no-preload-808539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:29 UTC │
	│ image   │ old-k8s-version-956814 image list --format=json                                                                                                                                                                                               │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ pause   │ -p old-k8s-version-956814 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │                     │
	│ delete  │ -p old-k8s-version-956814                                                                                                                                                                                                                     │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ delete  │ -p old-k8s-version-956814                                                                                                                                                                                                                     │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ start   │ -p embed-certs-063117 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p cert-expiration-489554 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-489554       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p cert-expiration-489554                                                                                                                                                                                                                     │ cert-expiration-489554       │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p disable-driver-mounts-246527                                                                                                                                                                                                               │ disable-driver-mounts-246527 │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p default-k8s-diff-port-523257 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ image   │ no-preload-808539 image list --format=json                                                                                                                                                                                                    │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ pause   │ -p no-preload-808539 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-063117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
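The Audit table above is minikube's command history for this agent: rows with an empty END TIME correspond to commands that failed or were still running when the post-mortem was captured, which is why the failing `addons enable metrics-server` and `pause` invocations show no end time. The same history is persisted outside the report and can be re-read after the fact (a sketch; the path follows the MINIKUBE_HOME seen in the logs above, and the `--audit` flag is assumed to be present in this minikube build):

    out/minikube-linux-amd64 logs --audit
    tail -n 5 /home/jenkins/minikube-integration/21738-8849/.minikube/logs/audit.json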
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:29:07
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
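Decoding the header per the format line above: in `I1016 18:29:07.040256  254209 out.go:360]`, `I` is the severity (I/W/E/F), `1016` is the month and day, the next field is the wall-clock time, `254209` is the PID of the writing process, and `out.go:360` is the source location. Since several tests log into the same capture, the PID field is the reliable way to pull out one stream (a sketch; `last-start.log` is a hypothetical file holding the block below):

    awk '$3 == "254209"' last-start.log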
	I1016 18:29:07.040256  254209 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:29:07.040551  254209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:29:07.040562  254209 out.go:374] Setting ErrFile to fd 2...
	I1016 18:29:07.040565  254209 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:29:07.040803  254209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:29:07.041325  254209 out.go:368] Setting JSON to false
	I1016 18:29:07.042806  254209 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4295,"bootTime":1760635052,"procs":320,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:29:07.042932  254209 start.go:141] virtualization: kvm guest
	I1016 18:29:07.045364  254209 out.go:179] * [default-k8s-diff-port-523257] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:29:07.046957  254209 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:29:07.046958  254209 notify.go:220] Checking for updates...
	I1016 18:29:07.050966  254209 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:29:07.052908  254209 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:29:07.054502  254209 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 18:29:07.055956  254209 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:29:07.057344  254209 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:29:07.059464  254209 config.go:182] Loaded profile config "embed-certs-063117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:29:07.059605  254209 config.go:182] Loaded profile config "kubernetes-upgrade-750025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:29:07.059765  254209 config.go:182] Loaded profile config "no-preload-808539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:29:07.059863  254209 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:29:07.085980  254209 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 18:29:07.086152  254209 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:29:07.152740  254209 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-16 18:29:07.141947952 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:29:07.152862  254209 docker.go:318] overlay module found
	I1016 18:29:07.154961  254209 out.go:179] * Using the docker driver based on user configuration
	I1016 18:29:07.156386  254209 start.go:305] selected driver: docker
	I1016 18:29:07.156405  254209 start.go:925] validating driver "docker" against <nil>
	I1016 18:29:07.156417  254209 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:29:07.157063  254209 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:29:07.222394  254209 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-16 18:29:07.211344644 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:29:07.222535  254209 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 18:29:07.222748  254209 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:29:07.224789  254209 out.go:179] * Using Docker driver with root privileges
	I1016 18:29:07.226432  254209 cni.go:84] Creating CNI manager for ""
	I1016 18:29:07.226503  254209 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:29:07.226522  254209 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1016 18:29:07.226597  254209 start.go:349] cluster config:
	{Name:default-k8s-diff-port-523257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:29:07.228189  254209 out.go:179] * Starting "default-k8s-diff-port-523257" primary control-plane node in "default-k8s-diff-port-523257" cluster
	I1016 18:29:07.229711  254209 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:29:07.231414  254209 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:29:07.232838  254209 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:29:07.232890  254209 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1016 18:29:07.232901  254209 cache.go:58] Caching tarball of preloaded images
	I1016 18:29:07.232950  254209 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:29:07.233007  254209 preload.go:233] Found /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1016 18:29:07.233023  254209 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:29:07.233110  254209 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/config.json ...
	I1016 18:29:07.233129  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/config.json: {Name:mkc8f0a47ba498cd8655372776f58860c7a1a49d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:07.255362  254209 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:29:07.255388  254209 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:29:07.255409  254209 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:29:07.255451  254209 start.go:360] acquireMachinesLock for default-k8s-diff-port-523257: {Name:mk0ef672dc84306ea126d15d9b249684df6a69ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:29:07.255579  254209 start.go:364] duration metric: took 109.249µs to acquireMachinesLock for "default-k8s-diff-port-523257"
	I1016 18:29:07.255609  254209 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-523257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:29:07.255702  254209 start.go:125] createHost starting for "" (driver="docker")
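From this point the capture interleaves three writers: 254209 (the default-k8s-diff-port-523257 start above), 245371 (a pod-readiness wait on coredns), and 228782 (a log-gathering loop against 192.168.76.2). Each process flushes in chunks, so timestamps only increase monotonically within a single PID; note how 245371's 18:29:05 entry lands after 254209's 18:29:07 lines. A stable sort on the time field re-merges the streams chronologically (a sketch, same hypothetical file as above):

    sort -s -k2,2 last-start.log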
	W1016 18:29:05.418755  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	W1016 18:29:07.419105  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	I1016 18:29:04.081460  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:04.081500  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:06.598777  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:06.599234  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:06.599283  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:06.599337  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:06.632534  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:06.632559  228782 cri.go:89] found id: ""
	I1016 18:29:06.632566  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:06.632623  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:06.636735  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:06.636800  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:06.670881  228782 cri.go:89] found id: ""
	I1016 18:29:06.670915  228782 logs.go:282] 0 containers: []
	W1016 18:29:06.670928  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:06.670937  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:06.670990  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:06.701324  228782 cri.go:89] found id: ""
	I1016 18:29:06.701352  228782 logs.go:282] 0 containers: []
	W1016 18:29:06.701362  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:06.701370  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:06.701431  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:06.735895  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:06.735922  228782 cri.go:89] found id: ""
	I1016 18:29:06.735930  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:06.735980  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:06.741105  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:06.741178  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:06.774597  228782 cri.go:89] found id: ""
	I1016 18:29:06.774618  228782 logs.go:282] 0 containers: []
	W1016 18:29:06.774625  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:06.774632  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:06.774674  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:06.806134  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:06.806153  228782 cri.go:89] found id: ""
	I1016 18:29:06.806163  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:06.806215  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:06.811555  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:06.811627  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:06.846430  228782 cri.go:89] found id: ""
	I1016 18:29:06.846456  228782 logs.go:282] 0 containers: []
	W1016 18:29:06.846465  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:06.846472  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:06.846528  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:06.878395  228782 cri.go:89] found id: ""
	I1016 18:29:06.878419  228782 logs.go:282] 0 containers: []
	W1016 18:29:06.878430  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:06.878440  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:06.878454  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:06.938432  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:06.938467  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:06.970056  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:06.970085  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:07.027971  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:07.028000  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:07.064564  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:07.064596  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:07.164562  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:07.164594  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:07.185438  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:07.185470  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:07.260040  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:07.260063  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:07.260077  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
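The 228782 loop above is minikube's degraded-mode log gatherer: the apiserver healthz probe at 192.168.76.2:8443 is refused and `kubectl describe nodes` fails against localhost:8443, so it falls back to enumerating control-plane containers via CRI and dumping each one directly. Only kube-apiserver, kube-scheduler, and kube-controller-manager containers are found; etcd, coredns, kube-proxy, kindnet, and storage-provisioner are absent, consistent with a control plane that is not serving. The per-component fallback is a two-step, verbatim from this run:

    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f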
	I1016 18:29:07.258815  254209 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1016 18:29:07.259101  254209 start.go:159] libmachine.API.Create for "default-k8s-diff-port-523257" (driver="docker")
	I1016 18:29:07.259145  254209 client.go:168] LocalClient.Create starting
	I1016 18:29:07.259324  254209 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem
	I1016 18:29:07.259400  254209 main.go:141] libmachine: Decoding PEM data...
	I1016 18:29:07.259427  254209 main.go:141] libmachine: Parsing certificate...
	I1016 18:29:07.259512  254209 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem
	I1016 18:29:07.259555  254209 main.go:141] libmachine: Decoding PEM data...
	I1016 18:29:07.259573  254209 main.go:141] libmachine: Parsing certificate...
	I1016 18:29:07.260104  254209 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-523257 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1016 18:29:07.281148  254209 cli_runner.go:211] docker network inspect default-k8s-diff-port-523257 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1016 18:29:07.281225  254209 network_create.go:284] running [docker network inspect default-k8s-diff-port-523257] to gather additional debugging logs...
	I1016 18:29:07.281243  254209 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-523257
	W1016 18:29:07.301649  254209 cli_runner.go:211] docker network inspect default-k8s-diff-port-523257 returned with exit code 1
	I1016 18:29:07.301683  254209 network_create.go:287] error running [docker network inspect default-k8s-diff-port-523257]: docker network inspect default-k8s-diff-port-523257: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-523257 not found
	I1016 18:29:07.301701  254209 network_create.go:289] output of [docker network inspect default-k8s-diff-port-523257]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-523257 not found
	
	** /stderr **
	I1016 18:29:07.301822  254209 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:29:07.322829  254209 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e6b487beca69 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:46:43:25:0f:93} reservation:<nil>}
	I1016 18:29:07.323663  254209 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9d79ecee39e1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:a0:12:f5:af:3a} reservation:<nil>}
	I1016 18:29:07.324428  254209 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-23b5ade12eda IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:13:e4:8d:c1:04} reservation:<nil>}
	I1016 18:29:07.324921  254209 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a07ac2eb0982 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:42:2a:d5:21:5c:9c} reservation:<nil>}
	I1016 18:29:07.325701  254209 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea8b80}
	I1016 18:29:07.325766  254209 network_create.go:124] attempt to create docker network default-k8s-diff-port-523257 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1016 18:29:07.325819  254209 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-523257 default-k8s-diff-port-523257
	I1016 18:29:07.389443  254209 network_create.go:108] docker network default-k8s-diff-port-523257 192.168.85.0/24 created
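The subnet scan above steps through the 192.168.x.0/24 private ranges in increments of 9 (49, 58, 67, 76), skipping each one already claimed by another profile's bridge, and settles on 192.168.85.0/24. Once created, the choice can be confirmed straight from the network object (a sketch):

    docker network inspect default-k8s-diff-port-523257 \
      --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'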
	I1016 18:29:07.389474  254209 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-523257" container
	I1016 18:29:07.389534  254209 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1016 18:29:07.408685  254209 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-523257 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-523257 --label created_by.minikube.sigs.k8s.io=true
	I1016 18:29:07.429641  254209 oci.go:103] Successfully created a docker volume default-k8s-diff-port-523257
	I1016 18:29:07.429766  254209 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-523257-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-523257 --entrypoint /usr/bin/test -v default-k8s-diff-port-523257:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1016 18:29:07.867408  254209 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-523257
	I1016 18:29:07.867462  254209 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:29:07.867483  254209 kic.go:194] Starting extracting preloaded images to volume ...
	I1016 18:29:07.867554  254209 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-523257:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1016 18:29:11.718052  254209 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-523257:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (3.850427538s)
	I1016 18:29:11.718089  254209 kic.go:203] duration metric: took 3.850601984s to extract preloaded images to volume ...
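The two `docker run` steps above are the preload shortcut: rather than pulling images inside the new node, minikube creates the profile volume, then untars a prebuilt CRI-O image store into it (3.85s here versus minutes of pulls). The volume is later mounted at /var in the node container, so the extracted tree becomes the runtime's storage. To peek at what the preload seeded (a sketch; assumes the extracted tree includes CRI-O's storage under lib/containers, and uses a throwaway busybox container):

    docker run --rm -v default-k8s-diff-port-523257:/var busybox ls /var/lib/containers/storage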
	W1016 18:29:11.718202  254209 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1016 18:29:11.718242  254209 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1016 18:29:11.718287  254209 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1016 18:29:11.783561  254209 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-523257 --name default-k8s-diff-port-523257 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-523257 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-523257 --network default-k8s-diff-port-523257 --ip 192.168.85.2 --volume default-k8s-diff-port-523257:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
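The full `docker run` for the node shows the kic recipe end to end: privileged with seccomp and apparmor unconfined, tmpfs on /run and /tmp, the preloaded volume on /var, a static 192.168.85.2 on the new bridge, and five `--publish=127.0.0.1::PORT` bindings whose ephemeral host ports are resolved afterwards, just like the embed-certs inspect output earlier. Once the container is up, the assignments are one command away (a sketch):

    docker port default-k8s-diff-port-523257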
	W1016 18:29:09.920187  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	W1016 18:29:11.920840  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	I1016 18:29:09.798326  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:09.798815  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:09.798876  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:09.798935  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:09.834829  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:09.834862  228782 cri.go:89] found id: ""
	I1016 18:29:09.834871  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:09.834929  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:09.840366  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:09.840444  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:09.872774  228782 cri.go:89] found id: ""
	I1016 18:29:09.872802  228782 logs.go:282] 0 containers: []
	W1016 18:29:09.872812  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:09.872819  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:09.872878  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:09.909210  228782 cri.go:89] found id: ""
	I1016 18:29:09.909236  228782 logs.go:282] 0 containers: []
	W1016 18:29:09.909247  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:09.909255  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:09.909312  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:09.945086  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:09.945108  228782 cri.go:89] found id: ""
	I1016 18:29:09.945117  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:09.945174  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:09.950041  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:09.950103  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:09.987902  228782 cri.go:89] found id: ""
	I1016 18:29:09.987927  228782 logs.go:282] 0 containers: []
	W1016 18:29:09.987938  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:09.987949  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:09.988003  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:10.021037  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:10.021074  228782 cri.go:89] found id: ""
	I1016 18:29:10.021082  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:10.021134  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:10.026004  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:10.026077  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:10.055087  228782 cri.go:89] found id: ""
	I1016 18:29:10.055111  228782 logs.go:282] 0 containers: []
	W1016 18:29:10.055121  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:10.055135  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:10.055193  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:10.085674  228782 cri.go:89] found id: ""
	I1016 18:29:10.085703  228782 logs.go:282] 0 containers: []
	W1016 18:29:10.085737  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:10.085750  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:10.085763  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:10.164177  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:10.164213  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:10.199764  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:10.199797  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:10.318961  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:10.318998  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:10.347541  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:10.347582  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:10.426635  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:10.426658  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:10.426673  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:10.460893  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:10.460927  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:10.514361  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:10.514395  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:13.045784  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:13.046220  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:13.046274  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:13.046330  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:13.079185  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:13.079212  228782 cri.go:89] found id: ""
	I1016 18:29:13.079222  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:13.079289  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:13.083978  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:13.084050  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:13.114350  228782 cri.go:89] found id: ""
	I1016 18:29:13.114374  228782 logs.go:282] 0 containers: []
	W1016 18:29:13.114385  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:13.114392  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:13.114444  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:13.141976  228782 cri.go:89] found id: ""
	I1016 18:29:13.142002  228782 logs.go:282] 0 containers: []
	W1016 18:29:13.142010  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:13.142016  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:13.142086  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:13.174818  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:13.174848  228782 cri.go:89] found id: ""
	I1016 18:29:13.174858  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:13.174909  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:13.179004  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:13.179070  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:13.214403  228782 cri.go:89] found id: ""
	I1016 18:29:13.214431  228782 logs.go:282] 0 containers: []
	W1016 18:29:13.214442  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:13.214449  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:13.214507  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:13.246810  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:13.246834  228782 cri.go:89] found id: ""
	I1016 18:29:13.246844  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:13.246902  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:13.251623  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:13.251685  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:13.283291  228782 cri.go:89] found id: ""
	I1016 18:29:13.283318  228782 logs.go:282] 0 containers: []
	W1016 18:29:13.283329  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:13.283339  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:13.283388  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:13.311343  228782 cri.go:89] found id: ""
	I1016 18:29:13.311368  228782 logs.go:282] 0 containers: []
	W1016 18:29:13.311376  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:13.311383  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:13.311396  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:13.368339  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:13.368377  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:13.398197  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:13.398227  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:13.511753  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:13.511788  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:13.529854  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:13.529890  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:13.602327  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:13.602347  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:13.602359  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:13.636600  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:13.636635  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:13.688431  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:13.688469  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
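The two gathering passes above (pid 228782) list container IDs per component and then tail each one with crictl. A minimal manual equivalent of that step, assuming crictl is installed at /usr/local/bin/crictl as this log shows:

	# List and tail logs for each control-plane component, as in the passes above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
	  sudo /usr/local/bin/crictl ps -a --quiet --name="$name" | while read -r id; do
	    sudo /usr/local/bin/crictl logs --tail 400 "$id"
	  done
	done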
	I1016 18:29:14.812495  249491 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1016 18:29:14.812565  249491 kubeadm.go:318] [preflight] Running pre-flight checks
	I1016 18:29:14.812651  249491 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1016 18:29:14.812697  249491 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1016 18:29:14.812750  249491 kubeadm.go:318] OS: Linux
	I1016 18:29:14.812798  249491 kubeadm.go:318] CGROUPS_CPU: enabled
	I1016 18:29:14.812846  249491 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1016 18:29:14.812885  249491 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1016 18:29:14.812952  249491 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1016 18:29:14.812998  249491 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1016 18:29:14.813044  249491 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1016 18:29:14.813153  249491 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1016 18:29:14.813231  249491 kubeadm.go:318] CGROUPS_IO: enabled
	I1016 18:29:14.813325  249491 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1016 18:29:14.813441  249491 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1016 18:29:14.813562  249491 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1016 18:29:14.813642  249491 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1016 18:29:14.815445  249491 out.go:252]   - Generating certificates and keys ...
	I1016 18:29:14.815539  249491 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 18:29:14.815602  249491 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 18:29:14.815663  249491 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 18:29:14.815743  249491 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 18:29:14.815797  249491 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 18:29:14.815883  249491 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 18:29:14.815954  249491 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 18:29:14.816076  249491 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-063117 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1016 18:29:14.816123  249491 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 18:29:14.816240  249491 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-063117 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1016 18:29:14.816345  249491 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 18:29:14.816434  249491 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 18:29:14.816488  249491 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 18:29:14.816537  249491 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 18:29:14.816611  249491 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 18:29:14.816701  249491 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 18:29:14.816787  249491 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 18:29:14.816885  249491 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 18:29:14.816956  249491 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 18:29:14.817026  249491 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 18:29:14.817091  249491 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 18:29:14.818496  249491 out.go:252]   - Booting up control plane ...
	I1016 18:29:14.818580  249491 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 18:29:14.818643  249491 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 18:29:14.818755  249491 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 18:29:14.818887  249491 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 18:29:14.819010  249491 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 18:29:14.819110  249491 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 18:29:14.819187  249491 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 18:29:14.819224  249491 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 18:29:14.819345  249491 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 18:29:14.819458  249491 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 18:29:14.819519  249491 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500924512s
	I1016 18:29:14.819610  249491 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 18:29:14.819682  249491 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1016 18:29:14.819785  249491 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 18:29:14.819861  249491 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 18:29:14.819937  249491 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.311071654s
	I1016 18:29:14.819995  249491 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.104436473s
	I1016 18:29:14.820062  249491 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.00209408s
	I1016 18:29:14.820157  249491 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 18:29:14.820281  249491 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 18:29:14.820375  249491 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 18:29:14.820585  249491 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-063117 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 18:29:14.820666  249491 kubeadm.go:318] [bootstrap-token] Using token: 5rsifa.smk486u4t69rbatb
	I1016 18:29:14.822434  249491 out.go:252]   - Configuring RBAC rules ...
	I1016 18:29:14.822560  249491 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 18:29:14.822656  249491 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 18:29:14.822845  249491 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 18:29:14.823060  249491 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 18:29:14.823170  249491 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 18:29:14.823249  249491 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 18:29:14.823359  249491 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 18:29:14.823399  249491 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 18:29:14.823440  249491 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 18:29:14.823446  249491 kubeadm.go:318] 
	I1016 18:29:14.823500  249491 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 18:29:14.823519  249491 kubeadm.go:318] 
	I1016 18:29:14.823599  249491 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 18:29:14.823606  249491 kubeadm.go:318] 
	I1016 18:29:14.823628  249491 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 18:29:14.823679  249491 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 18:29:14.823767  249491 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 18:29:14.823775  249491 kubeadm.go:318] 
	I1016 18:29:14.823844  249491 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 18:29:14.823859  249491 kubeadm.go:318] 
	I1016 18:29:14.823926  249491 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 18:29:14.823936  249491 kubeadm.go:318] 
	I1016 18:29:14.824017  249491 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 18:29:14.824127  249491 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 18:29:14.824285  249491 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 18:29:14.824304  249491 kubeadm.go:318] 
	I1016 18:29:14.824446  249491 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 18:29:14.824583  249491 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 18:29:14.824596  249491 kubeadm.go:318] 
	I1016 18:29:14.824739  249491 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 5rsifa.smk486u4t69rbatb \
	I1016 18:29:14.824843  249491 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c \
	I1016 18:29:14.824866  249491 kubeadm.go:318] 	--control-plane 
	I1016 18:29:14.824870  249491 kubeadm.go:318] 
	I1016 18:29:14.824963  249491 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 18:29:14.824974  249491 kubeadm.go:318] 
	I1016 18:29:14.825046  249491 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 5rsifa.smk486u4t69rbatb \
	I1016 18:29:14.825152  249491 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c 
	I1016 18:29:14.825162  249491 cni.go:84] Creating CNI manager for ""
	I1016 18:29:14.825169  249491 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:29:14.826898  249491 out.go:179] * Configuring CNI (Container Networking Interface) ...
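The [control-plane-check] lines replayed above poll fixed health endpoints. A hedged manual reproduction, with addresses taken from this log (-k because the serving certificates are cluster-local):

	curl -ks https://192.168.103.2:8443/livez      # kube-apiserver
	curl -ks https://127.0.0.1:10257/healthz       # kube-controller-manager
	curl -ks https://127.0.0.1:10259/livez         # kube-scheduler
	curl -s  http://127.0.0.1:10248/healthz        # kubelet (plain HTTP)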
	I1016 18:29:12.063356  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Running}}
	I1016 18:29:12.082378  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:29:12.101794  254209 cli_runner.go:164] Run: docker exec default-k8s-diff-port-523257 stat /var/lib/dpkg/alternatives/iptables
	I1016 18:29:12.150828  254209 oci.go:144] the created container "default-k8s-diff-port-523257" has a running status.
	I1016 18:29:12.150862  254209 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa...
	I1016 18:29:12.360966  254209 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1016 18:29:12.395477  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:29:12.421296  254209 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1016 18:29:12.421318  254209 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-523257 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1016 18:29:12.475647  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:29:12.500605  254209 machine.go:93] provisionDockerMachine start ...
	I1016 18:29:12.500741  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:12.520832  254209 main.go:141] libmachine: Using SSH client type: native
	I1016 18:29:12.521147  254209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1016 18:29:12.521169  254209 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:29:12.668259  254209 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-523257
	
	I1016 18:29:12.668290  254209 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-523257"
	I1016 18:29:12.668359  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:12.690556  254209 main.go:141] libmachine: Using SSH client type: native
	I1016 18:29:12.690997  254209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1016 18:29:12.691041  254209 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-523257 && echo "default-k8s-diff-port-523257" | sudo tee /etc/hostname
	I1016 18:29:12.853318  254209 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-523257
	
	I1016 18:29:12.853397  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:12.874368  254209 main.go:141] libmachine: Using SSH client type: native
	I1016 18:29:12.875979  254209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1016 18:29:12.876032  254209 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-523257' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-523257/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-523257' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:29:13.023166  254209 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:29:13.023197  254209 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8849/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8849/.minikube}
	I1016 18:29:13.023247  254209 ubuntu.go:190] setting up certificates
	I1016 18:29:13.023261  254209 provision.go:84] configureAuth start
	I1016 18:29:13.023324  254209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523257
	I1016 18:29:13.044297  254209 provision.go:143] copyHostCerts
	I1016 18:29:13.044377  254209 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem, removing ...
	I1016 18:29:13.044387  254209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem
	I1016 18:29:13.044480  254209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem (1078 bytes)
	I1016 18:29:13.044612  254209 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem, removing ...
	I1016 18:29:13.044620  254209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem
	I1016 18:29:13.044665  254209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem (1123 bytes)
	I1016 18:29:13.044833  254209 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem, removing ...
	I1016 18:29:13.044854  254209 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem
	I1016 18:29:13.044899  254209 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem (1679 bytes)
	I1016 18:29:13.044986  254209 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-523257 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-523257 localhost minikube]
	I1016 18:29:13.322042  254209 provision.go:177] copyRemoteCerts
	I1016 18:29:13.322098  254209 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:29:13.322130  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:13.341345  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:13.443517  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:29:13.466314  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 18:29:13.488307  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1016 18:29:13.510900  254209 provision.go:87] duration metric: took 487.621457ms to configureAuth
	I1016 18:29:13.510932  254209 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:29:13.511156  254209 config.go:182] Loaded profile config "default-k8s-diff-port-523257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:29:13.511275  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:13.533416  254209 main.go:141] libmachine: Using SSH client type: native
	I1016 18:29:13.533709  254209 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1016 18:29:13.533754  254209 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:29:13.799038  254209 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:29:13.799068  254209 machine.go:96] duration metric: took 1.2984414s to provisionDockerMachine
	I1016 18:29:13.799083  254209 client.go:171] duration metric: took 6.539927602s to LocalClient.Create
	I1016 18:29:13.799111  254209 start.go:167] duration metric: took 6.540012376s to libmachine.API.Create "default-k8s-diff-port-523257"
	I1016 18:29:13.799126  254209 start.go:293] postStartSetup for "default-k8s-diff-port-523257" (driver="docker")
	I1016 18:29:13.799140  254209 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:29:13.799211  254209 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:29:13.799291  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:13.819622  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:13.924749  254209 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:29:13.928900  254209 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:29:13.928949  254209 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:29:13.928962  254209 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/addons for local assets ...
	I1016 18:29:13.929014  254209 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/files for local assets ...
	I1016 18:29:13.929153  254209 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem -> 123752.pem in /etc/ssl/certs
	I1016 18:29:13.929270  254209 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:29:13.938068  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:29:13.959350  254209 start.go:296] duration metric: took 160.208327ms for postStartSetup
	I1016 18:29:13.959772  254209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523257
	I1016 18:29:13.981564  254209 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/config.json ...
	I1016 18:29:13.981929  254209 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:29:13.981986  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:14.002862  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:14.105028  254209 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:29:14.109906  254209 start.go:128] duration metric: took 6.854190815s to createHost
	I1016 18:29:14.109928  254209 start.go:83] releasing machines lock for "default-k8s-diff-port-523257", held for 6.854337757s
	I1016 18:29:14.109985  254209 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523257
	I1016 18:29:14.129342  254209 ssh_runner.go:195] Run: cat /version.json
	I1016 18:29:14.129364  254209 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:29:14.129388  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:14.129427  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:14.148145  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:14.148510  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:14.301264  254209 ssh_runner.go:195] Run: systemctl --version
	I1016 18:29:14.308012  254209 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:29:14.343595  254209 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:29:14.348610  254209 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:29:14.348680  254209 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:29:14.374585  254209 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
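The find invocation above is dense; a readable sketch of what it does (an assumed reconstruction, consistent with the two files the log reports as disabled):

	# Rename bridge/podman CNI configs so kindnet's config wins, skipping
	# files already suffixed .mk_disabled (as the find's -not -name clause does).
	for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	  case "$f" in *.mk_disabled) continue ;; esac
	  [ -e "$f" ] && sudo mv "$f" "${f}.mk_disabled"
	done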
	I1016 18:29:14.374606  254209 start.go:495] detecting cgroup driver to use...
	I1016 18:29:14.374641  254209 detect.go:190] detected "systemd" cgroup driver on host os
	I1016 18:29:14.374699  254209 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:29:14.390967  254209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:29:14.404114  254209 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:29:14.404173  254209 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:29:14.423858  254209 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:29:14.443353  254209 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:29:14.528065  254209 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:29:14.616017  254209 docker.go:234] disabling docker service ...
	I1016 18:29:14.616093  254209 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:29:14.636286  254209 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:29:14.649917  254209 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:29:14.738496  254209 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:29:14.830481  254209 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:29:14.844213  254209 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:29:14.860041  254209 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:29:14.860111  254209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.871530  254209 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1016 18:29:14.871599  254209 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.882155  254209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.891583  254209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.901751  254209 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:29:14.911126  254209 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.923235  254209 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.940508  254209 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:29:14.951261  254209 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:29:14.961600  254209 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:29:14.969949  254209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:29:15.065750  254209 ssh_runner.go:195] Run: sudo systemctl restart crio
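Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings before crio is restarted (a reconstructed excerpt, not captured from the node):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]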
	I1016 18:29:15.196909  254209 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:29:15.197013  254209 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:29:15.201701  254209 start.go:563] Will wait 60s for crictl version
	I1016 18:29:15.201777  254209 ssh_runner.go:195] Run: which crictl
	I1016 18:29:15.205695  254209 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:29:15.235561  254209 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:29:15.235649  254209 ssh_runner.go:195] Run: crio --version
	I1016 18:29:15.265880  254209 ssh_runner.go:195] Run: crio --version
	I1016 18:29:15.296467  254209 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:29:15.297746  254209 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-523257 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:29:15.315570  254209 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1016 18:29:15.319846  254209 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:29:15.330320  254209 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-523257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:29:15.330442  254209 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:29:15.330496  254209 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:29:15.362598  254209 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:29:15.362621  254209 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:29:15.362681  254209 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:29:15.388591  254209 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:29:15.388610  254209 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:29:15.388617  254209 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1016 18:29:15.388687  254209 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-523257 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 18:29:15.388767  254209 ssh_runner.go:195] Run: crio config
	I1016 18:29:15.438126  254209 cni.go:84] Creating CNI manager for ""
	I1016 18:29:15.438153  254209 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:29:15.438169  254209 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:29:15.438189  254209 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-523257 NodeName:default-k8s-diff-port-523257 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:29:15.438304  254209 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-523257"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
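The rendered config is copied to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp step below); a file of this shape is what kubeadm consumes, roughly as follows (hedged, since the exact invocation and flags minikube passes are not shown in this log):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml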
	I1016 18:29:15.438360  254209 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:29:15.446851  254209 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:29:15.446904  254209 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:29:15.455376  254209 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1016 18:29:15.468422  254209 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:29:15.485061  254209 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1016 18:29:15.499028  254209 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1016 18:29:15.502992  254209 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:29:15.514119  254209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:29:15.600483  254209 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:29:15.628358  254209 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257 for IP: 192.168.85.2
	I1016 18:29:15.628376  254209 certs.go:195] generating shared ca certs ...
	I1016 18:29:15.628396  254209 certs.go:227] acquiring lock for ca certs: {Name:mkebf15a3970a66f77cf66e14f6efeaafcab5e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:15.628509  254209 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key
	I1016 18:29:15.628562  254209 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key
	I1016 18:29:15.628573  254209 certs.go:257] generating profile certs ...
	I1016 18:29:15.628628  254209 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/client.key
	I1016 18:29:15.628653  254209 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/client.crt with IP's: []
	I1016 18:29:15.968981  254209 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/client.crt ...
	I1016 18:29:15.969015  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/client.crt: {Name:mkc48781ddaf69d7e01ca677e4849b4caaee56c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:15.969236  254209 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/client.key ...
	I1016 18:29:15.969256  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/client.key: {Name:mkc621b8b4bfad359a056391feef8110384c6c9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:15.969390  254209 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key.0a5b079c
	I1016 18:29:15.969417  254209 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.crt.0a5b079c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1016 18:29:16.391278  254209 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.crt.0a5b079c ...
	I1016 18:29:16.391304  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.crt.0a5b079c: {Name:mk6cc283b84aa2fe24d23bc336c141b44112e826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:16.391464  254209 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key.0a5b079c ...
	I1016 18:29:16.391483  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key.0a5b079c: {Name:mkcaa57ee51fbf6de8c055b9c377d12f3a0aabf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:16.391560  254209 certs.go:382] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.crt.0a5b079c -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.crt
	I1016 18:29:16.391667  254209 certs.go:386] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key.0a5b079c -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key
	I1016 18:29:16.391772  254209 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.key
	I1016 18:29:16.391791  254209 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.crt with IP's: []
	I1016 18:29:16.512660  254209 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.crt ...
	I1016 18:29:16.512692  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.crt: {Name:mk2207d19f2814a793ac863fddc556c919eb7e93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:16.512893  254209 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.key ...
	I1016 18:29:16.512912  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.key: {Name:mk634f24088d880b43b87026568c66491c8f3f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:16.513157  254209 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem (1338 bytes)
	W1016 18:29:16.513208  254209 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375_empty.pem, impossibly tiny 0 bytes
	I1016 18:29:16.513224  254209 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem (1679 bytes)
	I1016 18:29:16.513258  254209 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem (1078 bytes)
	I1016 18:29:16.513299  254209 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:29:16.513332  254209 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem (1679 bytes)
	I1016 18:29:16.513390  254209 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:29:16.514000  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:29:16.534467  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1016 18:29:16.553911  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:29:16.572888  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1016 18:29:16.593316  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1016 18:29:16.613396  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1016 18:29:16.633847  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:29:16.652859  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:29:16.671301  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem --> /usr/share/ca-certificates/12375.pem (1338 bytes)
	I1016 18:29:16.692139  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /usr/share/ca-certificates/123752.pem (1708 bytes)
	I1016 18:29:16.711854  254209 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:29:16.733100  254209 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:29:16.748870  254209 ssh_runner.go:195] Run: openssl version
	I1016 18:29:16.756698  254209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12375.pem && ln -fs /usr/share/ca-certificates/12375.pem /etc/ssl/certs/12375.pem"
	I1016 18:29:16.765852  254209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12375.pem
	I1016 18:29:16.770890  254209 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:49 /usr/share/ca-certificates/12375.pem
	I1016 18:29:16.770951  254209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12375.pem
	I1016 18:29:16.809579  254209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12375.pem /etc/ssl/certs/51391683.0"
	I1016 18:29:16.818448  254209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123752.pem && ln -fs /usr/share/ca-certificates/123752.pem /etc/ssl/certs/123752.pem"
	I1016 18:29:16.828572  254209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123752.pem
	I1016 18:29:16.833466  254209 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:49 /usr/share/ca-certificates/123752.pem
	I1016 18:29:16.833518  254209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123752.pem
	I1016 18:29:16.869942  254209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123752.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:29:16.879161  254209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:29:16.888390  254209 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:29:16.892672  254209 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:29:16.892743  254209 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:29:16.928324  254209 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
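The openssl/ln sequence above follows OpenSSL's subject-hash lookup convention: each CA installed under /usr/share/ca-certificates is symlinked into /etc/ssl/certs as <hash>.0 so the TLS stack can find it by subject hash. A minimal Go sketch of that one step, shelling out to openssl the same way the logged commands do (illustrative only, assuming `openssl` is on PATH; this is not minikube's certs.go):

-- illustrative sketch (Go, not from the test run) --
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the logged pair of commands:
//   openssl x509 -hash -noout -in <cert>
//   ln -fs <cert> /etc/ssl/certs/<hash>.0
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // "-f": drop any stale link before re-pointing it
	return os.Symlink(certPath, link)
}

func main() {
	// Path taken from the log above; run as root to write /etc/ssl/certs.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
-- /illustrative sketch --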
	I1016 18:29:16.937883  254209 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:29:16.941427  254209 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1016 18:29:16.941477  254209 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-523257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:29:16.941533  254209 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:29:16.941590  254209 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:29:16.969823  254209 cri.go:89] found id: ""
	I1016 18:29:16.969879  254209 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:29:16.978105  254209 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 18:29:16.986454  254209 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1016 18:29:16.986509  254209 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 18:29:16.994659  254209 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1016 18:29:16.994677  254209 kubeadm.go:157] found existing configuration files:
	
	I1016 18:29:16.994734  254209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1016 18:29:17.002515  254209 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1016 18:29:17.002569  254209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 18:29:17.010005  254209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1016 18:29:17.017762  254209 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1016 18:29:17.017809  254209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 18:29:17.025281  254209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1016 18:29:17.033745  254209 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1016 18:29:17.033809  254209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	W1016 18:29:14.418032  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	W1016 18:29:16.918331  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	I1016 18:29:16.216787  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:16.217184  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:16.217232  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:16.217290  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:16.260046  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:16.260069  228782 cri.go:89] found id: ""
	I1016 18:29:16.260081  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:16.260138  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:16.264404  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:16.264461  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:16.292812  228782 cri.go:89] found id: ""
	I1016 18:29:16.292840  228782 logs.go:282] 0 containers: []
	W1016 18:29:16.292849  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:16.292857  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:16.292916  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:16.320501  228782 cri.go:89] found id: ""
	I1016 18:29:16.320525  228782 logs.go:282] 0 containers: []
	W1016 18:29:16.320537  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:16.320543  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:16.320601  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:16.349176  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:16.349201  228782 cri.go:89] found id: ""
	I1016 18:29:16.349211  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:16.349261  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:16.353478  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:16.353557  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:16.381526  228782 cri.go:89] found id: ""
	I1016 18:29:16.381551  228782 logs.go:282] 0 containers: []
	W1016 18:29:16.381560  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:16.381566  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:16.381622  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:16.410669  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:16.410688  228782 cri.go:89] found id: ""
	I1016 18:29:16.410698  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:16.410766  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:16.415132  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:16.415201  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:16.444976  228782 cri.go:89] found id: ""
	I1016 18:29:16.445004  228782 logs.go:282] 0 containers: []
	W1016 18:29:16.445015  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:16.445023  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:16.445079  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:16.476137  228782 cri.go:89] found id: ""
	I1016 18:29:16.476164  228782 logs.go:282] 0 containers: []
	W1016 18:29:16.476174  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:16.476185  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:16.476198  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:16.507953  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:16.507978  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:16.570051  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:16.570092  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:16.603032  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:16.603070  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:16.693780  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:16.693814  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:16.710844  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:16.710881  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:16.773893  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:16.773917  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:16.773931  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:16.807340  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:16.807368  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
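Each retry cycle above begins with the same probe: GET https://192.168.76.2:8443/healthz, where a dial error such as "connection refused" marks the apiserver as stopped and triggers another round of log gathering. A minimal sketch of such a probe (illustrative only; the endpoint is taken from the log, and TLS verification is skipped here purely for brevity):

-- illustrative sketch (Go, not from the test run) --
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverHealthy treats any transport error (e.g. connection refused)
// as "not ready yet", matching the behavior visible in the log.
func apiserverHealthy(url string) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	fmt.Println(apiserverHealthy("https://192.168.76.2:8443/healthz"))
}
-- /illustrative sketch --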
	I1016 18:29:14.828263  249491 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 18:29:14.833625  249491 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 18:29:14.833646  249491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 18:29:14.848089  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 18:29:15.084417  249491 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 18:29:15.084527  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:15.084544  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-063117 minikube.k8s.io/updated_at=2025_10_16T18_29_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=embed-certs-063117 minikube.k8s.io/primary=true
	I1016 18:29:15.180501  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:15.180512  249491 ops.go:34] apiserver oom_adj: -16
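The oom_adj check above confirms the apiserver is strongly protected from the Linux OOM killer (the log records -16). A small sketch of the same read, assuming a Linux /proc filesystem (illustrative only):

-- illustrative sketch (Go, not from the test run) --
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of: cat /proc/$(pgrep kube-apiserver)/oom_adj
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		os.Exit(1)
	}
	pid := strings.Fields(string(out))[0] // first matching PID
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("kube-apiserver oom_adj: %s", adj)
}
-- /illustrative sketch --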
	I1016 18:29:15.681132  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:16.180980  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:16.681259  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:17.181627  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:17.681148  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:18.180852  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:18.681519  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:19.180964  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:19.681224  249491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:19.769979  249491 kubeadm.go:1113] duration metric: took 4.685530547s to wait for elevateKubeSystemPrivileges
	I1016 18:29:19.770014  249491 kubeadm.go:402] duration metric: took 18.251827782s to StartCluster
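The half-second cadence of the `kubectl get sa default` runs above is a wait loop: once the default service account exists, kube-system privileges can be elevated (the ~4.7s duration logged for elevateKubeSystemPrivileges is this loop). A sketch of the pattern (illustrative only; the kubeconfig path is the one from the log):

-- illustrative sketch (Go, not from the test run) --
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` roughly every 500ms
// until the service account is visible or the timeout expires.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists; RBAC bootstrapping has settled
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
		fmt.Println(err)
	}
}
-- /illustrative sketch --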
	I1016 18:29:19.770034  249491 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:19.770128  249491 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:29:19.771546  249491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:19.771780  249491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 18:29:19.771795  249491 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:29:19.771842  249491 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:29:19.771949  249491 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-063117"
	I1016 18:29:19.771958  249491 addons.go:69] Setting default-storageclass=true in profile "embed-certs-063117"
	I1016 18:29:19.771971  249491 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-063117"
	I1016 18:29:19.771979  249491 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-063117"
	I1016 18:29:19.771979  249491 config.go:182] Loaded profile config "embed-certs-063117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:29:19.772007  249491 host.go:66] Checking if "embed-certs-063117" exists ...
	I1016 18:29:19.772413  249491 cli_runner.go:164] Run: docker container inspect embed-certs-063117 --format={{.State.Status}}
	I1016 18:29:19.772558  249491 cli_runner.go:164] Run: docker container inspect embed-certs-063117 --format={{.State.Status}}
	I1016 18:29:19.776284  249491 out.go:179] * Verifying Kubernetes components...
	I1016 18:29:19.777682  249491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:29:19.800564  249491 addons.go:238] Setting addon default-storageclass=true in "embed-certs-063117"
	I1016 18:29:19.800668  249491 host.go:66] Checking if "embed-certs-063117" exists ...
	I1016 18:29:19.801165  249491 cli_runner.go:164] Run: docker container inspect embed-certs-063117 --format={{.State.Status}}
	I1016 18:29:19.803130  249491 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:29:19.804678  249491 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:29:19.804699  249491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:29:19.804856  249491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-063117
	I1016 18:29:19.826115  249491 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:29:19.826138  249491 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:29:19.826207  249491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-063117
	I1016 18:29:19.832338  249491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/embed-certs-063117/id_rsa Username:docker}
	I1016 18:29:19.861747  249491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/embed-certs-063117/id_rsa Username:docker}
	I1016 18:29:19.882221  249491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 18:29:19.965940  249491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:29:19.969094  249491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:29:19.987077  249491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:29:20.101590  249491 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1016 18:29:20.105376  249491 node_ready.go:35] waiting up to 6m0s for node "embed-certs-063117" to be "Ready" ...
	I1016 18:29:20.328792  249491 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
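The long sed pipeline a few lines up rewrites CoreDNS's Corefile to add a `hosts` stanza mapping host.minikube.internal to the gateway IP, placed before the `forward` plugin so the static record wins. A sketch of the same transformation as a pure string rewrite (illustrative only; minikube itself pipes sed into `kubectl replace` over ssh, and the sample Corefile below is a hypothetical minimal one):

-- illustrative sketch (Go, not from the test run) --
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block immediately before the
// `forward . /etc/resolv.conf` directive, mirroring the logged sed edit.
func injectHostRecord(corefile, ip string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hosts)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.103.1"))
}
-- /illustrative sketch --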
	I1016 18:29:17.041611  254209 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1016 18:29:17.049911  254209 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1016 18:29:17.049971  254209 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
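The four grep/rm pairs above (admin.conf, kubelet.conf, controller-manager.conf, scheduler.conf) are stale-config cleanup: a kubeconfig is kept only if it already points at the expected control-plane endpoint, and is otherwise deleted so the upcoming `kubeadm init` regenerates it. A compact sketch of that check (illustrative only; paths and endpoint are taken from the log):

-- illustrative sketch (Go, not from the test run) --
package main

import (
	"bytes"
	"os"
)

// pruneStaleKubeconfig removes the file unless it already contains the
// expected control-plane URL; a missing file is treated as already clean.
func pruneStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil || !bytes.Contains(data, []byte(endpoint)) {
		if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
			return rmErr
		}
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		_ = pruneStaleKubeconfig("/etc/kubernetes/"+f, endpoint)
	}
}
-- /illustrative sketch --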
	I1016 18:29:17.058089  254209 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1016 18:29:17.137219  254209 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1016 18:29:17.203085  254209 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1016 18:29:19.418382  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	W1016 18:29:21.918282  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	I1016 18:29:19.359592  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:19.360042  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:19.360098  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:19.360144  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:19.393040  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:19.393066  228782 cri.go:89] found id: ""
	I1016 18:29:19.393076  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:19.393131  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:19.397814  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:19.397881  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:19.427286  228782 cri.go:89] found id: ""
	I1016 18:29:19.427314  228782 logs.go:282] 0 containers: []
	W1016 18:29:19.427322  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:19.427327  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:19.427375  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:19.462227  228782 cri.go:89] found id: ""
	I1016 18:29:19.462266  228782 logs.go:282] 0 containers: []
	W1016 18:29:19.462279  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:19.462287  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:19.462348  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:19.496749  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:19.496774  228782 cri.go:89] found id: ""
	I1016 18:29:19.496783  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:19.496840  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:19.501521  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:19.501595  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:19.529247  228782 cri.go:89] found id: ""
	I1016 18:29:19.529274  228782 logs.go:282] 0 containers: []
	W1016 18:29:19.529289  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:19.529296  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:19.529359  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:19.564781  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:19.564804  228782 cri.go:89] found id: ""
	I1016 18:29:19.564814  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:19.564929  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:19.570532  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:19.570606  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:19.604855  228782 cri.go:89] found id: ""
	I1016 18:29:19.604883  228782 logs.go:282] 0 containers: []
	W1016 18:29:19.604893  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:19.604901  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:19.604953  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:19.638992  228782 cri.go:89] found id: ""
	I1016 18:29:19.639022  228782 logs.go:282] 0 containers: []
	W1016 18:29:19.639034  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:19.639045  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:19.639061  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:19.701460  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:19.701505  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:19.742847  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:19.742874  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:19.829432  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:19.829906  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:19.877323  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:19.877363  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:20.013993  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:20.014026  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:20.033495  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:20.033528  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:20.125927  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:20.125955  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:20.125979  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:22.676779  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:22.677325  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:22.677386  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:22.677441  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:22.704967  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:22.704992  228782 cri.go:89] found id: ""
	I1016 18:29:22.705001  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:22.705054  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:22.709172  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:22.709227  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:22.737460  228782 cri.go:89] found id: ""
	I1016 18:29:22.737488  228782 logs.go:282] 0 containers: []
	W1016 18:29:22.737497  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:22.737502  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:22.737557  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:22.765144  228782 cri.go:89] found id: ""
	I1016 18:29:22.765167  228782 logs.go:282] 0 containers: []
	W1016 18:29:22.765174  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:22.765182  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:22.765234  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:22.794804  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:22.794830  228782 cri.go:89] found id: ""
	I1016 18:29:22.794842  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:22.794896  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:22.799171  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:22.799236  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:22.826223  228782 cri.go:89] found id: ""
	I1016 18:29:22.826245  228782 logs.go:282] 0 containers: []
	W1016 18:29:22.826254  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:22.826262  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:22.826320  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:22.853663  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:22.853687  228782 cri.go:89] found id: ""
	I1016 18:29:22.853697  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:22.853766  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:22.857917  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:22.857976  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:22.886082  228782 cri.go:89] found id: ""
	I1016 18:29:22.886104  228782 logs.go:282] 0 containers: []
	W1016 18:29:22.886111  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:22.886116  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:22.886161  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:22.914756  228782 cri.go:89] found id: ""
	I1016 18:29:22.914785  228782 logs.go:282] 0 containers: []
	W1016 18:29:22.914795  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:22.914806  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:22.914819  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:22.948094  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:22.948123  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:23.063153  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:23.063191  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:23.086210  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:23.086246  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:23.158625  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:23.158644  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:23.158655  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:23.196125  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:23.196164  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:23.249568  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:23.249603  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:23.278700  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:23.278755  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:20.330231  249491 addons.go:514] duration metric: took 558.387286ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1016 18:29:20.605751  249491 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-063117" context rescaled to 1 replicas
	W1016 18:29:22.108077  249491 node_ready.go:57] node "embed-certs-063117" has "Ready":"False" status (will retry)
	W1016 18:29:24.108885  249491 node_ready.go:57] node "embed-certs-063117" has "Ready":"False" status (will retry)
	W1016 18:29:23.920030  245371 pod_ready.go:104] pod "coredns-66bc5c9577-ntqqg" is not "Ready", error: <nil>
	I1016 18:29:26.418984  245371 pod_ready.go:94] pod "coredns-66bc5c9577-ntqqg" is "Ready"
	I1016 18:29:26.419015  245371 pod_ready.go:86] duration metric: took 37.506349558s for pod "coredns-66bc5c9577-ntqqg" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.421830  245371 pod_ready.go:83] waiting for pod "etcd-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.426444  245371 pod_ready.go:94] pod "etcd-no-preload-808539" is "Ready"
	I1016 18:29:26.426468  245371 pod_ready.go:86] duration metric: took 4.611842ms for pod "etcd-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.428754  245371 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.433020  245371 pod_ready.go:94] pod "kube-apiserver-no-preload-808539" is "Ready"
	I1016 18:29:26.433042  245371 pod_ready.go:86] duration metric: took 4.265191ms for pod "kube-apiserver-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.435232  245371 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.616325  245371 pod_ready.go:94] pod "kube-controller-manager-no-preload-808539" is "Ready"
	I1016 18:29:26.616358  245371 pod_ready.go:86] duration metric: took 181.098764ms for pod "kube-controller-manager-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:26.816373  245371 pod_ready.go:83] waiting for pod "kube-proxy-68kl9" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:27.217098  245371 pod_ready.go:94] pod "kube-proxy-68kl9" is "Ready"
	I1016 18:29:27.217132  245371 pod_ready.go:86] duration metric: took 400.735206ms for pod "kube-proxy-68kl9" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:27.419792  245371 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:27.816058  245371 pod_ready.go:94] pod "kube-scheduler-no-preload-808539" is "Ready"
	I1016 18:29:27.816084  245371 pod_ready.go:86] duration metric: took 396.261228ms for pod "kube-scheduler-no-preload-808539" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:27.816099  245371 pod_ready.go:40] duration metric: took 38.907119982s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:29:27.860942  245371 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1016 18:29:27.862530  245371 out.go:179] * Done! kubectl is now configured to use "no-preload-808539" cluster and "default" namespace by default
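The pod_ready waits above poll each kube-system control-plane pod until its Ready condition reports True. One way to express that per-pod check (illustrative only, using kubectl's jsonpath condition filter; the pod name is taken from the log, and this is not minikube's pod_ready.go):

-- illustrative sketch (Go, not from the test run) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reads the pod's Ready condition via kubectl jsonpath and
// reports whether it is "True".
func podReady(name string) bool {
	out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	for !podReady("etcd-no-preload-808539") {
		time.Sleep(2 * time.Second) // retry until the condition flips to True
	}
	fmt.Println("pod is Ready")
}
-- /illustrative sketch --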
	I1016 18:29:28.379667  254209 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1016 18:29:28.379756  254209 kubeadm.go:318] [preflight] Running pre-flight checks
	I1016 18:29:28.379854  254209 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1016 18:29:28.379919  254209 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1016 18:29:28.379960  254209 kubeadm.go:318] OS: Linux
	I1016 18:29:28.380039  254209 kubeadm.go:318] CGROUPS_CPU: enabled
	I1016 18:29:28.380108  254209 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1016 18:29:28.380162  254209 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1016 18:29:28.380210  254209 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1016 18:29:28.380249  254209 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1016 18:29:28.380302  254209 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1016 18:29:28.380342  254209 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1016 18:29:28.380378  254209 kubeadm.go:318] CGROUPS_IO: enabled
	I1016 18:29:28.380440  254209 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1016 18:29:28.380523  254209 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1016 18:29:28.380601  254209 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1016 18:29:28.380687  254209 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1016 18:29:28.382133  254209 out.go:252]   - Generating certificates and keys ...
	I1016 18:29:28.382223  254209 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 18:29:28.382325  254209 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 18:29:28.382409  254209 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 18:29:28.382524  254209 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 18:29:28.382610  254209 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 18:29:28.382684  254209 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 18:29:28.382785  254209 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 18:29:28.382994  254209 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-523257 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1016 18:29:28.383094  254209 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 18:29:28.383267  254209 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-523257 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1016 18:29:28.383368  254209 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 18:29:28.383477  254209 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 18:29:28.383518  254209 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 18:29:28.383588  254209 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 18:29:28.383656  254209 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 18:29:28.383737  254209 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 18:29:28.383814  254209 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 18:29:28.383912  254209 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 18:29:28.383990  254209 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 18:29:28.384065  254209 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 18:29:28.384119  254209 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 18:29:28.385312  254209 out.go:252]   - Booting up control plane ...
	I1016 18:29:28.385390  254209 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 18:29:28.385468  254209 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 18:29:28.385537  254209 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 18:29:28.385629  254209 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 18:29:28.385708  254209 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 18:29:28.385846  254209 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 18:29:28.385944  254209 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 18:29:28.385987  254209 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 18:29:28.386112  254209 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 18:29:28.386205  254209 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 18:29:28.386257  254209 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501903762s
	I1016 18:29:28.386370  254209 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 18:29:28.386456  254209 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1016 18:29:28.386534  254209 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 18:29:28.386605  254209 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 18:29:28.386709  254209 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.138506302s
	I1016 18:29:28.386833  254209 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.998459766s
	I1016 18:29:28.386943  254209 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001498967s
	I1016 18:29:28.387079  254209 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 18:29:28.387241  254209 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 18:29:28.387341  254209 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 18:29:28.387557  254209 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-523257 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 18:29:28.387622  254209 kubeadm.go:318] [bootstrap-token] Using token: wqx7bh.ga0ezwq7c18mbgbm
	I1016 18:29:28.388960  254209 out.go:252]   - Configuring RBAC rules ...
	I1016 18:29:28.389058  254209 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 18:29:28.389159  254209 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 18:29:28.389377  254209 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 18:29:28.389512  254209 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 18:29:28.389640  254209 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 18:29:28.389787  254209 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 18:29:28.389938  254209 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 18:29:28.389981  254209 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 18:29:28.390023  254209 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 18:29:28.390028  254209 kubeadm.go:318] 
	I1016 18:29:28.390074  254209 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 18:29:28.390080  254209 kubeadm.go:318] 
	I1016 18:29:28.390140  254209 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 18:29:28.390146  254209 kubeadm.go:318] 
	I1016 18:29:28.390170  254209 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 18:29:28.390217  254209 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 18:29:28.390266  254209 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 18:29:28.390275  254209 kubeadm.go:318] 
	I1016 18:29:28.390327  254209 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 18:29:28.390333  254209 kubeadm.go:318] 
	I1016 18:29:28.390378  254209 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 18:29:28.390395  254209 kubeadm.go:318] 
	I1016 18:29:28.390444  254209 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 18:29:28.390542  254209 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 18:29:28.390666  254209 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 18:29:28.390676  254209 kubeadm.go:318] 
	I1016 18:29:28.390772  254209 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 18:29:28.390842  254209 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 18:29:28.390851  254209 kubeadm.go:318] 
	I1016 18:29:28.390920  254209 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token wqx7bh.ga0ezwq7c18mbgbm \
	I1016 18:29:28.391011  254209 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c \
	I1016 18:29:28.391035  254209 kubeadm.go:318] 	--control-plane 
	I1016 18:29:28.391043  254209 kubeadm.go:318] 
	I1016 18:29:28.391127  254209 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 18:29:28.391140  254209 kubeadm.go:318] 
	I1016 18:29:28.391228  254209 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token wqx7bh.ga0ezwq7c18mbgbm \
	I1016 18:29:28.391331  254209 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c 
	I1016 18:29:28.391345  254209 cni.go:84] Creating CNI manager for ""
	I1016 18:29:28.391351  254209 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:29:28.392742  254209 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1016 18:29:25.836785  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:25.837228  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:25.837274  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:25.837338  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:25.864224  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:25.864250  228782 cri.go:89] found id: ""
	I1016 18:29:25.864260  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:25.864307  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:25.868459  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:25.868525  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:25.894631  228782 cri.go:89] found id: ""
	I1016 18:29:25.894658  228782 logs.go:282] 0 containers: []
	W1016 18:29:25.894671  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:25.894679  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:25.894750  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:25.926152  228782 cri.go:89] found id: ""
	I1016 18:29:25.926179  228782 logs.go:282] 0 containers: []
	W1016 18:29:25.926190  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:25.926198  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:25.926251  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:25.963328  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:25.963355  228782 cri.go:89] found id: ""
	I1016 18:29:25.963365  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:25.963425  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:25.968500  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:25.968557  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:26.000655  228782 cri.go:89] found id: ""
	I1016 18:29:26.000684  228782 logs.go:282] 0 containers: []
	W1016 18:29:26.000693  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:26.000701  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:26.000796  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:26.033474  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:26.033497  228782 cri.go:89] found id: ""
	I1016 18:29:26.033505  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:26.033570  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:26.038349  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:26.038413  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:26.069780  228782 cri.go:89] found id: ""
	I1016 18:29:26.069808  228782 logs.go:282] 0 containers: []
	W1016 18:29:26.069818  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:26.069824  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:26.069882  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:26.103136  228782 cri.go:89] found id: ""
	I1016 18:29:26.103171  228782 logs.go:282] 0 containers: []
	W1016 18:29:26.103183  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:26.103201  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:26.103215  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:26.139969  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:26.139999  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:26.208221  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:26.208254  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:26.244473  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:26.244505  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:26.350643  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:26.350676  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:26.369275  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:26.369312  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:26.442326  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:26.442349  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:26.442365  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:26.483134  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:26.483169  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
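The block above is one pass of minikube's failure loop: probe the apiserver's healthz endpoint, and when the connection is refused, enumerate control-plane containers via CRI and tail their logs. The probe can be reproduced by hand with commands that already appear in the log (a sketch; the IP and container ID are specific to this run):

    # probe the apiserver health endpoint (refused in this run)
    curl -k https://192.168.76.2:8443/healthz
    # list kube-apiserver containers known to CRI-O
    sudo crictl ps -a --quiet --name=kube-apiserver
    # tail the most recent apiserver container's logs
    sudo crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f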
	I1016 18:29:29.040764  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:29.041151  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:29.041199  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:29.041257  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	W1016 18:29:26.109979  249491 node_ready.go:57] node "embed-certs-063117" has "Ready":"False" status (will retry)
	W1016 18:29:28.608795  249491 node_ready.go:57] node "embed-certs-063117" has "Ready":"False" status (will retry)
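The node_ready warnings above come from polling the node's Ready condition until it flips to True. The equivalent manual check (a sketch, assuming kubectl is already pointed at the embed-certs-063117 cluster):

    kubectl get node embed-certs-063117 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'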
	I1016 18:29:28.393724  254209 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 18:29:28.398316  254209 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 18:29:28.398334  254209 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 18:29:28.412460  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
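Those three steps are the whole CNI rollout: confirm the portmap plugin binary is present, copy the generated kindnet manifest onto the node, and apply it with the version-pinned kubectl. Condensed into the commands as logged (a sketch):

    stat /opt/cni/bin/portmap
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply \
      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml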
	I1016 18:29:28.630658  254209 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 18:29:28.630739  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:28.630750  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-523257 minikube.k8s.io/updated_at=2025_10_16T18_29_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=default-k8s-diff-port-523257 minikube.k8s.io/primary=true
	I1016 18:29:28.644240  254209 ops.go:34] apiserver oom_adj: -16
	I1016 18:29:28.721533  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:29.221909  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:29.721810  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:30.222262  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:30.721738  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:31.222945  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:31.722510  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:30.608496  249491 node_ready.go:49] node "embed-certs-063117" is "Ready"
	I1016 18:29:30.608520  249491 node_ready.go:38] duration metric: took 10.503114261s for node "embed-certs-063117" to be "Ready" ...
	I1016 18:29:30.608533  249491 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:29:30.608583  249491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:29:30.621063  249491 api_server.go:72] duration metric: took 10.849240762s to wait for apiserver process to appear ...
	I1016 18:29:30.621089  249491 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:29:30.621109  249491 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1016 18:29:30.626152  249491 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1016 18:29:30.627107  249491 api_server.go:141] control plane version: v1.34.1
	I1016 18:29:30.627128  249491 api_server.go:131] duration metric: took 6.033168ms to wait for apiserver health ...
	I1016 18:29:30.627136  249491 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:29:30.630659  249491 system_pods.go:59] 8 kube-system pods found
	I1016 18:29:30.630699  249491 system_pods.go:61] "coredns-66bc5c9577-v85b5" [023f2420-4132-430e-90ed-4e7c5533aeeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:29:30.630746  249491 system_pods.go:61] "etcd-embed-certs-063117" [fd54eaf6-ae80-44ce-a6fe-6fbeeac7ea85] Running
	I1016 18:29:30.630759  249491 system_pods.go:61] "kindnet-9qp8q" [6c45c361-9d61-45f5-9863-a1ceb556db84] Running
	I1016 18:29:30.630772  249491 system_pods.go:61] "kube-apiserver-embed-certs-063117" [a04b20d4-2663-4436-aad1-a1951df32809] Running
	I1016 18:29:30.630916  249491 system_pods.go:61] "kube-controller-manager-embed-certs-063117" [49fb248e-c033-4cc9-b1f0-51c0b60eaa1c] Running
	I1016 18:29:30.630926  249491 system_pods.go:61] "kube-proxy-rsvq2" [7cb8239f-5115-4775-aab6-f0fc7c2dc2fb] Running
	I1016 18:29:30.630937  249491 system_pods.go:61] "kube-scheduler-embed-certs-063117" [28178b78-ce0e-4ad4-b335-3180c4a3e3a3] Running
	I1016 18:29:30.630959  249491 system_pods.go:61] "storage-provisioner" [cc86ca12-3c7b-4447-97a9-b998051c6b68] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:29:30.630971  249491 system_pods.go:74] duration metric: took 3.829293ms to wait for pod list to return data ...
	I1016 18:29:30.630985  249491 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:29:30.633438  249491 default_sa.go:45] found service account: "default"
	I1016 18:29:30.633459  249491 default_sa.go:55] duration metric: took 2.463926ms for default service account to be created ...
	I1016 18:29:30.633469  249491 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 18:29:30.637230  249491 system_pods.go:86] 8 kube-system pods found
	I1016 18:29:30.637270  249491 system_pods.go:89] "coredns-66bc5c9577-v85b5" [023f2420-4132-430e-90ed-4e7c5533aeeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:29:30.637278  249491 system_pods.go:89] "etcd-embed-certs-063117" [fd54eaf6-ae80-44ce-a6fe-6fbeeac7ea85] Running
	I1016 18:29:30.637286  249491 system_pods.go:89] "kindnet-9qp8q" [6c45c361-9d61-45f5-9863-a1ceb556db84] Running
	I1016 18:29:30.637292  249491 system_pods.go:89] "kube-apiserver-embed-certs-063117" [a04b20d4-2663-4436-aad1-a1951df32809] Running
	I1016 18:29:30.637299  249491 system_pods.go:89] "kube-controller-manager-embed-certs-063117" [49fb248e-c033-4cc9-b1f0-51c0b60eaa1c] Running
	I1016 18:29:30.637308  249491 system_pods.go:89] "kube-proxy-rsvq2" [7cb8239f-5115-4775-aab6-f0fc7c2dc2fb] Running
	I1016 18:29:30.637313  249491 system_pods.go:89] "kube-scheduler-embed-certs-063117" [28178b78-ce0e-4ad4-b335-3180c4a3e3a3] Running
	I1016 18:29:30.637321  249491 system_pods.go:89] "storage-provisioner" [cc86ca12-3c7b-4447-97a9-b998051c6b68] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:29:30.637342  249491 retry.go:31] will retry after 264.141768ms: missing components: kube-dns
	I1016 18:29:30.905515  249491 system_pods.go:86] 8 kube-system pods found
	I1016 18:29:30.905557  249491 system_pods.go:89] "coredns-66bc5c9577-v85b5" [023f2420-4132-430e-90ed-4e7c5533aeeb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:29:30.905566  249491 system_pods.go:89] "etcd-embed-certs-063117" [fd54eaf6-ae80-44ce-a6fe-6fbeeac7ea85] Running
	I1016 18:29:30.905573  249491 system_pods.go:89] "kindnet-9qp8q" [6c45c361-9d61-45f5-9863-a1ceb556db84] Running
	I1016 18:29:30.905578  249491 system_pods.go:89] "kube-apiserver-embed-certs-063117" [a04b20d4-2663-4436-aad1-a1951df32809] Running
	I1016 18:29:30.905583  249491 system_pods.go:89] "kube-controller-manager-embed-certs-063117" [49fb248e-c033-4cc9-b1f0-51c0b60eaa1c] Running
	I1016 18:29:30.905586  249491 system_pods.go:89] "kube-proxy-rsvq2" [7cb8239f-5115-4775-aab6-f0fc7c2dc2fb] Running
	I1016 18:29:30.905591  249491 system_pods.go:89] "kube-scheduler-embed-certs-063117" [28178b78-ce0e-4ad4-b335-3180c4a3e3a3] Running
	I1016 18:29:30.905599  249491 system_pods.go:89] "storage-provisioner" [cc86ca12-3c7b-4447-97a9-b998051c6b68] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:29:30.905621  249491 retry.go:31] will retry after 272.815126ms: missing components: kube-dns
	I1016 18:29:31.182959  249491 system_pods.go:86] 8 kube-system pods found
	I1016 18:29:31.182996  249491 system_pods.go:89] "coredns-66bc5c9577-v85b5" [023f2420-4132-430e-90ed-4e7c5533aeeb] Running
	I1016 18:29:31.183004  249491 system_pods.go:89] "etcd-embed-certs-063117" [fd54eaf6-ae80-44ce-a6fe-6fbeeac7ea85] Running
	I1016 18:29:31.183010  249491 system_pods.go:89] "kindnet-9qp8q" [6c45c361-9d61-45f5-9863-a1ceb556db84] Running
	I1016 18:29:31.183016  249491 system_pods.go:89] "kube-apiserver-embed-certs-063117" [a04b20d4-2663-4436-aad1-a1951df32809] Running
	I1016 18:29:31.183023  249491 system_pods.go:89] "kube-controller-manager-embed-certs-063117" [49fb248e-c033-4cc9-b1f0-51c0b60eaa1c] Running
	I1016 18:29:31.183028  249491 system_pods.go:89] "kube-proxy-rsvq2" [7cb8239f-5115-4775-aab6-f0fc7c2dc2fb] Running
	I1016 18:29:31.183034  249491 system_pods.go:89] "kube-scheduler-embed-certs-063117" [28178b78-ce0e-4ad4-b335-3180c4a3e3a3] Running
	I1016 18:29:31.183038  249491 system_pods.go:89] "storage-provisioner" [cc86ca12-3c7b-4447-97a9-b998051c6b68] Running
	I1016 18:29:31.183048  249491 system_pods.go:126] duration metric: took 549.572251ms to wait for k8s-apps to be running ...
	I1016 18:29:31.183057  249491 system_svc.go:44] waiting for kubelet service to be running ...
	I1016 18:29:31.183107  249491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:29:31.196951  249491 system_svc.go:56] duration metric: took 13.886426ms WaitForService to wait for kubelet
	I1016 18:29:31.196976  249491 kubeadm.go:586] duration metric: took 11.42515893s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:29:31.196996  249491 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:29:31.200148  249491 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1016 18:29:31.200174  249491 node_conditions.go:123] node cpu capacity is 8
	I1016 18:29:31.200186  249491 node_conditions.go:105] duration metric: took 3.185275ms to run NodePressure ...
	I1016 18:29:31.200197  249491 start.go:241] waiting for startup goroutines ...
	I1016 18:29:31.200203  249491 start.go:246] waiting for cluster config update ...
	I1016 18:29:31.200216  249491 start.go:255] writing updated cluster config ...
	I1016 18:29:31.200464  249491 ssh_runner.go:195] Run: rm -f paused
	I1016 18:29:31.204547  249491 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:29:31.208677  249491 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-v85b5" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.212900  249491 pod_ready.go:94] pod "coredns-66bc5c9577-v85b5" is "Ready"
	I1016 18:29:31.212920  249491 pod_ready.go:86] duration metric: took 4.216559ms for pod "coredns-66bc5c9577-v85b5" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.214804  249491 pod_ready.go:83] waiting for pod "etcd-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.218157  249491 pod_ready.go:94] pod "etcd-embed-certs-063117" is "Ready"
	I1016 18:29:31.218176  249491 pod_ready.go:86] duration metric: took 3.355374ms for pod "etcd-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.219965  249491 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.224645  249491 pod_ready.go:94] pod "kube-apiserver-embed-certs-063117" is "Ready"
	I1016 18:29:31.224665  249491 pod_ready.go:86] duration metric: took 4.684934ms for pod "kube-apiserver-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.226498  249491 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.608777  249491 pod_ready.go:94] pod "kube-controller-manager-embed-certs-063117" is "Ready"
	I1016 18:29:31.608802  249491 pod_ready.go:86] duration metric: took 382.283573ms for pod "kube-controller-manager-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:31.809171  249491 pod_ready.go:83] waiting for pod "kube-proxy-rsvq2" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:32.209404  249491 pod_ready.go:94] pod "kube-proxy-rsvq2" is "Ready"
	I1016 18:29:32.209429  249491 pod_ready.go:86] duration metric: took 400.235447ms for pod "kube-proxy-rsvq2" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:32.410356  249491 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:32.809170  249491 pod_ready.go:94] pod "kube-scheduler-embed-certs-063117" is "Ready"
	I1016 18:29:32.809199  249491 pod_ready.go:86] duration metric: took 398.804528ms for pod "kube-scheduler-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:29:32.809212  249491 pod_ready.go:40] duration metric: took 1.604631583s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:29:32.863208  249491 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1016 18:29:32.865029  249491 out.go:179] * Done! kubectl is now configured to use "embed-certs-063117" cluster and "default" namespace by default
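The pod_ready phase above waits on each control-plane pod serially, matching by the component and k8s-app labels. Outside minikube the same condition can be expressed with kubectl's built-in wait (a sketch, not what minikube itself runs):

    kubectl -n kube-system wait pod -l k8s-app=kube-dns \
      --for=condition=Ready --timeout=240s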
	I1016 18:29:32.222199  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:32.721921  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:33.221579  254209 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:29:33.296163  254209 kubeadm.go:1113] duration metric: took 4.665491695s to wait for elevateKubeSystemPrivileges
	I1016 18:29:33.296194  254209 kubeadm.go:402] duration metric: took 16.35471992s to StartCluster
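elevateKubeSystemPrivileges, timed above at roughly 4.7s, is the half-second retry loop on `kubectl get sa default` seen in the preceding lines: kubeadm creates the default service account asynchronously, so minikube polls for it before binding cluster-admin to kube-system. A hand-rolled equivalent (a sketch):

    # wait for kubeadm to create the default service account
    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig; do sleep 0.5; done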
	I1016 18:29:33.296214  254209 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:33.296275  254209 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:29:33.298961  254209 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:29:33.299346  254209 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:29:33.299369  254209 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 18:29:33.299475  254209 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:29:33.299572  254209 config.go:182] Loaded profile config "default-k8s-diff-port-523257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:29:33.299578  254209 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-523257"
	I1016 18:29:33.299595  254209 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-523257"
	I1016 18:29:33.299620  254209 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-523257"
	I1016 18:29:33.299628  254209 host.go:66] Checking if "default-k8s-diff-port-523257" exists ...
	I1016 18:29:33.299636  254209 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-523257"
	I1016 18:29:33.300012  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:29:33.300177  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:29:33.302171  254209 out.go:179] * Verifying Kubernetes components...
	I1016 18:29:33.304470  254209 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:29:33.332040  254209 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-523257"
	I1016 18:29:33.332146  254209 host.go:66] Checking if "default-k8s-diff-port-523257" exists ...
	I1016 18:29:33.332598  254209 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:29:33.336186  254209 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:29:33.337836  254209 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:29:33.337921  254209 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:29:33.338014  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:33.370804  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:33.371205  254209 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:29:33.371228  254209 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:29:33.371286  254209 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:29:33.396649  254209 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:29:33.405998  254209 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 18:29:33.480661  254209 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:29:33.493270  254209 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:29:33.508563  254209 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:29:33.588784  254209 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1016 18:29:33.590519  254209 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-523257" to be "Ready" ...
	I1016 18:29:33.809245  254209 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
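The "host record injected" line is the outcome of the sed pipeline a few lines earlier: minikube splices a hosts plugin block into the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway. The injected block, as constructed by the logged sed expression:

    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }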
	I1016 18:29:29.070288  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:29.070318  228782 cri.go:89] found id: ""
	I1016 18:29:29.070328  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:29.070383  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:29.074419  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:29.074490  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:29.101845  228782 cri.go:89] found id: ""
	I1016 18:29:29.101875  228782 logs.go:282] 0 containers: []
	W1016 18:29:29.101886  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:29.101894  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:29.101945  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:29.130198  228782 cri.go:89] found id: ""
	I1016 18:29:29.130243  228782 logs.go:282] 0 containers: []
	W1016 18:29:29.130255  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:29.130267  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:29.130324  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:29.171097  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:29.171116  228782 cri.go:89] found id: ""
	I1016 18:29:29.171123  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:29.171166  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:29.175059  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:29.175114  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:29.204192  228782 cri.go:89] found id: ""
	I1016 18:29:29.204217  228782 logs.go:282] 0 containers: []
	W1016 18:29:29.204224  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:29.204229  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:29.204278  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:29.231647  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:29.231672  228782 cri.go:89] found id: ""
	I1016 18:29:29.231681  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:29.231757  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:29.236497  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:29.236557  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:29.266328  228782 cri.go:89] found id: ""
	I1016 18:29:29.266354  228782 logs.go:282] 0 containers: []
	W1016 18:29:29.266365  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:29.266372  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:29.266431  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:29.296904  228782 cri.go:89] found id: ""
	I1016 18:29:29.296926  228782 logs.go:282] 0 containers: []
	W1016 18:29:29.296936  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:29.296946  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:29.296957  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:29.389410  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:29.389443  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:29.404894  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:29.404925  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:29.463298  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:29.463323  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:29.463342  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:29.497484  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:29.497513  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:29.548374  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:29.548408  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:29.574914  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:29.574946  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:29.630476  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:29.630506  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:32.164804  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:32.165219  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:32.165273  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:32.165322  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:32.192921  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:32.192940  228782 cri.go:89] found id: ""
	I1016 18:29:32.192947  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:32.193009  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:32.197494  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:32.197566  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:32.226679  228782 cri.go:89] found id: ""
	I1016 18:29:32.226706  228782 logs.go:282] 0 containers: []
	W1016 18:29:32.226732  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:32.226740  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:32.226802  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:32.256127  228782 cri.go:89] found id: ""
	I1016 18:29:32.256152  228782 logs.go:282] 0 containers: []
	W1016 18:29:32.256162  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:32.256170  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:32.256231  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:32.286329  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:32.286351  228782 cri.go:89] found id: ""
	I1016 18:29:32.286361  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:32.286418  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:32.290615  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:32.290687  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:32.318965  228782 cri.go:89] found id: ""
	I1016 18:29:32.318989  228782 logs.go:282] 0 containers: []
	W1016 18:29:32.318999  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:32.319007  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:32.319086  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:32.349977  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:32.350001  228782 cri.go:89] found id: ""
	I1016 18:29:32.350011  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:32.350084  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:32.354512  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:32.354578  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:32.381776  228782 cri.go:89] found id: ""
	I1016 18:29:32.381805  228782 logs.go:282] 0 containers: []
	W1016 18:29:32.381814  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:32.381822  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:32.381884  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:32.413298  228782 cri.go:89] found id: ""
	I1016 18:29:32.413324  228782 logs.go:282] 0 containers: []
	W1016 18:29:32.413335  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:32.413347  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:32.413360  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:32.472097  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:32.472114  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:32.472127  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:32.505633  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:32.505661  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:32.555025  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:32.555072  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:32.585744  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:32.585777  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:32.644161  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:32.644194  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:32.676157  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:32.676182  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:32.772828  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:32.772860  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:33.810778  254209 addons.go:514] duration metric: took 511.307538ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1016 18:29:34.093650  254209 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-523257" context rescaled to 1 replicas
	W1016 18:29:35.593703  254209 node_ready.go:57] node "default-k8s-diff-port-523257" has "Ready":"False" status (will retry)
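The rescale above trims CoreDNS from kubeadm's default of two replicas to one, which is sufficient for a single-node cluster. The equivalent manual command (a sketch; minikube performs this through its own kapi helper rather than kubectl scale):

    kubectl -n kube-system scale deployment coredns --replicas=1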
	I1016 18:29:35.291809  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:35.292347  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:35.292397  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:35.292449  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:35.320203  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:35.320224  228782 cri.go:89] found id: ""
	I1016 18:29:35.320231  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:35.320276  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:35.324296  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:35.324356  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:35.351958  228782 cri.go:89] found id: ""
	I1016 18:29:35.351982  228782 logs.go:282] 0 containers: []
	W1016 18:29:35.351990  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:35.352012  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:35.352071  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:35.382337  228782 cri.go:89] found id: ""
	I1016 18:29:35.382364  228782 logs.go:282] 0 containers: []
	W1016 18:29:35.382375  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:35.382382  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:35.382436  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:35.409388  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:35.409406  228782 cri.go:89] found id: ""
	I1016 18:29:35.409413  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:35.409455  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:35.413485  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:35.413543  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:35.440778  228782 cri.go:89] found id: ""
	I1016 18:29:35.440804  228782 logs.go:282] 0 containers: []
	W1016 18:29:35.440812  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:35.440820  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:35.440896  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:35.466161  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:35.466184  228782 cri.go:89] found id: ""
	I1016 18:29:35.466193  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:35.466246  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:35.470498  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:35.470557  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:35.498773  228782 cri.go:89] found id: ""
	I1016 18:29:35.498794  228782 logs.go:282] 0 containers: []
	W1016 18:29:35.498800  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:35.498805  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:35.498850  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:35.525923  228782 cri.go:89] found id: ""
	I1016 18:29:35.525947  228782 logs.go:282] 0 containers: []
	W1016 18:29:35.525956  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:35.525982  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:35.526000  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:35.559484  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:35.559519  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:35.615011  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:35.615051  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:35.642652  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:35.642687  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:35.704004  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:35.704038  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:35.736269  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:35.736298  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:35.825956  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:35.825994  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:29:35.841899  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:35.841935  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:35.898506  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:38.400113  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:29:38.400540  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:29:38.400594  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:38.400649  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:38.427645  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:38.427665  228782 cri.go:89] found id: ""
	I1016 18:29:38.427674  228782 logs.go:282] 1 containers: [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:38.427732  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:38.431841  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:38.431910  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:38.459141  228782 cri.go:89] found id: ""
	I1016 18:29:38.459165  228782 logs.go:282] 0 containers: []
	W1016 18:29:38.459175  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:38.459182  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:38.459238  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:38.486994  228782 cri.go:89] found id: ""
	I1016 18:29:38.487021  228782 logs.go:282] 0 containers: []
	W1016 18:29:38.487032  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:38.487039  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:38.487100  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:38.514487  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:38.514508  228782 cri.go:89] found id: ""
	I1016 18:29:38.514515  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:38.514564  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:38.518661  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:38.518736  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:38.546066  228782 cri.go:89] found id: ""
	I1016 18:29:38.546087  228782 logs.go:282] 0 containers: []
	W1016 18:29:38.546095  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:38.546100  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:38.546154  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:38.574022  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:38.574039  228782 cri.go:89] found id: ""
	I1016 18:29:38.574045  228782 logs.go:282] 1 containers: [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:38.574087  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:38.578237  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:38.578307  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:38.607676  228782 cri.go:89] found id: ""
	I1016 18:29:38.607699  228782 logs.go:282] 0 containers: []
	W1016 18:29:38.607706  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:38.607736  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:38.607796  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:38.635578  228782 cri.go:89] found id: ""
	I1016 18:29:38.635604  228782 logs.go:282] 0 containers: []
	W1016 18:29:38.635615  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:38.635625  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:29:38.635640  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:29:38.694675  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:29:38.694699  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:29:38.694738  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:38.728850  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:38.728879  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:38.780750  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:29:38.780780  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:38.809679  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:29:38.809705  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:29:38.863006  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:29:38.863035  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:29:38.894630  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:29:38.894657  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:29:38.990653  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:29:38.990687  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
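Each refused healthz probe in this stretch triggers the same evidence sweep: kubelet and CRI-O journals, filtered dmesg, a `describe nodes` attempt (which fails while the apiserver is down), and per-container logs. The host-side commands, collected from the log (a sketch):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo crictl ps -a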
	
	
	==> CRI-O <==
	Oct 16 18:29:30 embed-certs-063117 crio[779]: time="2025-10-16T18:29:30.521476973Z" level=info msg="Starting container: 0a24cc8219f1df02e9ed15137fd3e13546b6938570eae36f0d4fc796b9d7fffb" id=5712982d-8b60-4ea0-a7c6-b02b129c5a19 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:29:30 embed-certs-063117 crio[779]: time="2025-10-16T18:29:30.523632617Z" level=info msg="Started container" PID=1869 containerID=0a24cc8219f1df02e9ed15137fd3e13546b6938570eae36f0d4fc796b9d7fffb description=kube-system/coredns-66bc5c9577-v85b5/coredns id=5712982d-8b60-4ea0-a7c6-b02b129c5a19 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a1ec4dbe76c5d90be2ce2ac6ce6555dd9a53bdbf10e2650b6189fa3c97873aed
	Oct 16 18:29:33 embed-certs-063117 crio[779]: time="2025-10-16T18:29:33.324071156Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f57b3813-3fc2-4328-a713-385271b568a4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:29:33 embed-certs-063117 crio[779]: time="2025-10-16T18:29:33.324197907Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:29:33 embed-certs-063117 crio[779]: time="2025-10-16T18:29:33.332836123Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1ce43df10f4c7da07dd0b485eb61e9c93496ae48167f7df058553cd958a9108b UID:2f13025d-16dc-4451-9e8d-c37732eb709a NetNS:/var/run/netns/2a8e2877-7ee4-474b-a2bb-6957d56a4aa8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005105a8}] Aliases:map[]}"
	Oct 16 18:29:33 embed-certs-063117 crio[779]: time="2025-10-16T18:29:33.33299473Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 16 18:29:33 embed-certs-063117 crio[779]: time="2025-10-16T18:29:33.347523057Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1ce43df10f4c7da07dd0b485eb61e9c93496ae48167f7df058553cd958a9108b UID:2f13025d-16dc-4451-9e8d-c37732eb709a NetNS:/var/run/netns/2a8e2877-7ee4-474b-a2bb-6957d56a4aa8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005105a8}] Aliases:map[]}"
	Oct 16 18:29:33 embed-certs-063117 crio[779]: time="2025-10-16T18:29:33.34770794Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 16 18:29:33 embed-certs-063117 crio[779]: time="2025-10-16T18:29:33.353129937Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 16 18:29:33 embed-certs-063117 crio[779]: time="2025-10-16T18:29:33.357259982Z" level=info msg="Ran pod sandbox 1ce43df10f4c7da07dd0b485eb61e9c93496ae48167f7df058553cd958a9108b with infra container: default/busybox/POD" id=f57b3813-3fc2-4328-a713-385271b568a4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:29:33 embed-certs-063117 crio[779]: time="2025-10-16T18:29:33.358840771Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ca2bc6be-b86e-4bd3-bbad-d5ca96ac086f name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:29:33 embed-certs-063117 crio[779]: time="2025-10-16T18:29:33.359108346Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ca2bc6be-b86e-4bd3-bbad-d5ca96ac086f name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:29:33 embed-certs-063117 crio[779]: time="2025-10-16T18:29:33.359239035Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ca2bc6be-b86e-4bd3-bbad-d5ca96ac086f name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:29:33 embed-certs-063117 crio[779]: time="2025-10-16T18:29:33.360597049Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b4d316b4-b05e-4d24-8ee5-fed1137873aa name=/runtime.v1.ImageService/PullImage
	Oct 16 18:29:33 embed-certs-063117 crio[779]: time="2025-10-16T18:29:33.367366152Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 16 18:29:34 embed-certs-063117 crio[779]: time="2025-10-16T18:29:34.746379451Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=b4d316b4-b05e-4d24-8ee5-fed1137873aa name=/runtime.v1.ImageService/PullImage
	Oct 16 18:29:34 embed-certs-063117 crio[779]: time="2025-10-16T18:29:34.747200955Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0a9575ea-928f-4680-bfec-ff38402f507c name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:29:34 embed-certs-063117 crio[779]: time="2025-10-16T18:29:34.748857894Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=af35583e-1700-4916-b5e1-8e1e53c195c2 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:29:34 embed-certs-063117 crio[779]: time="2025-10-16T18:29:34.752279269Z" level=info msg="Creating container: default/busybox/busybox" id=dbc63311-5980-4a31-9795-1a47a4e83d3d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:29:34 embed-certs-063117 crio[779]: time="2025-10-16T18:29:34.753161674Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:29:34 embed-certs-063117 crio[779]: time="2025-10-16T18:29:34.757948786Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:29:34 embed-certs-063117 crio[779]: time="2025-10-16T18:29:34.758343423Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:29:34 embed-certs-063117 crio[779]: time="2025-10-16T18:29:34.789982565Z" level=info msg="Created container 706597f4fe4b6449b51e10f7d520f3d810475ab31b2413eab2e609e83e589af2: default/busybox/busybox" id=dbc63311-5980-4a31-9795-1a47a4e83d3d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:29:34 embed-certs-063117 crio[779]: time="2025-10-16T18:29:34.790695395Z" level=info msg="Starting container: 706597f4fe4b6449b51e10f7d520f3d810475ab31b2413eab2e609e83e589af2" id=a4ef8705-81cf-4af7-b176-2fda18fbda73 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:29:34 embed-certs-063117 crio[779]: time="2025-10-16T18:29:34.792865026Z" level=info msg="Started container" PID=1950 containerID=706597f4fe4b6449b51e10f7d520f3d810475ab31b2413eab2e609e83e589af2 description=default/busybox/busybox id=a4ef8705-81cf-4af7-b176-2fda18fbda73 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1ce43df10f4c7da07dd0b485eb61e9c93496ae48167f7df058553cd958a9108b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	706597f4fe4b6       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   1ce43df10f4c7       busybox                                      default
	0a24cc8219f1d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   a1ec4dbe76c5d       coredns-66bc5c9577-v85b5                     kube-system
	fc82190beb159       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   0915ff4545170       storage-provisioner                          kube-system
	9c84194bc370d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      22 seconds ago      Running             kindnet-cni               0                   65cb0c2a4470c       kindnet-9qp8q                                kube-system
	b26de1f7b23b1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      22 seconds ago      Running             kube-proxy                0                   c31606311e96b       kube-proxy-rsvq2                             kube-system
	98355555672d4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   08b33734c3179       etcd-embed-certs-063117                      kube-system
	eb87d87231681       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   2a8b9d4340c58       kube-controller-manager-embed-certs-063117   kube-system
	d6321e5d30cb6       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   e2fc90efbd55b       kube-apiserver-embed-certs-063117            kube-system
	280f90d68f6d2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   808a7a3289276       kube-scheduler-embed-certs-063117            kube-system
	
	
	==> coredns [0a24cc8219f1df02e9ed15137fd3e13546b6938570eae36f0d4fc796b9d7fffb] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52862 - 61473 "HINFO IN 8531396570613904349.1507952878846123872. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048631992s
	
	
	==> describe nodes <==
	Name:               embed-certs-063117
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-063117
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=embed-certs-063117
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_29_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:29:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-063117
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:29:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:29:34 +0000   Thu, 16 Oct 2025 18:29:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:29:34 +0000   Thu, 16 Oct 2025 18:29:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:29:34 +0000   Thu, 16 Oct 2025 18:29:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 18:29:34 +0000   Thu, 16 Oct 2025 18:29:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-063117
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                70725f86-975b-492e-a584-749604224fc0
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-v85b5                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-embed-certs-063117                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-9qp8q                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-embed-certs-063117             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-embed-certs-063117    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-rsvq2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-embed-certs-063117             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 28s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s   kubelet          Node embed-certs-063117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s   kubelet          Node embed-certs-063117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s   kubelet          Node embed-certs-063117 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s   node-controller  Node embed-certs-063117 event: Registered Node embed-certs-063117 in Controller
	  Normal  NodeReady                12s   kubelet          Node embed-certs-063117 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.008703] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.022984] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023933] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +2.047848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +4.031714] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +8.063410] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[ +16.382910] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[Oct16 17:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	
	
	==> etcd [98355555672d42a50c26094fb5fc48c911f74761c8341064eb468e425ce235b3] <==
	{"level":"warn","ts":"2025-10-16T18:29:10.986307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:10.993622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:11.002198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:11.009803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:11.023969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:11.030272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:11.036802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:11.078644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36722","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-16T18:29:11.669518Z","caller":"traceutil/trace.go:172","msg":"trace[509736927] linearizableReadLoop","detail":"{readStateIndex:5; appliedIndex:5; }","duration":"116.117343ms","start":"2025-10-16T18:29:11.553370Z","end":"2025-10-16T18:29:11.669487Z","steps":["trace[509736927] 'read index received'  (duration: 116.109793ms)","trace[509736927] 'applied index is now lower than readState.Index'  (duration: 6.322µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-16T18:29:11.685514Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.298976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.103.2\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"warn","ts":"2025-10-16T18:29:11.685569Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.368014ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-16T18:29:11.685587Z","caller":"traceutil/trace.go:172","msg":"trace[758847782] range","detail":"{range_begin:/registry/masterleases/192.168.103.2; range_end:; response_count:0; response_revision:2; }","duration":"135.387703ms","start":"2025-10-16T18:29:11.550183Z","end":"2025-10-16T18:29:11.685571Z","steps":["trace[758847782] 'range keys from in-memory index tree'  (duration: 135.187441ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T18:29:11.685632Z","caller":"traceutil/trace.go:172","msg":"trace[247244938] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:0; response_revision:2; }","duration":"134.423636ms","start":"2025-10-16T18:29:11.551179Z","end":"2025-10-16T18:29:11.685602Z","steps":["trace[247244938] 'range keys from in-memory index tree'  (duration: 134.303663ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-16T18:29:11.685514Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.135781ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/embed-certs-063117\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-16T18:29:11.685679Z","caller":"traceutil/trace.go:172","msg":"trace[298488287] range","detail":"{range_begin:/registry/csinodes/embed-certs-063117; range_end:; response_count:0; response_revision:2; }","duration":"132.302069ms","start":"2025-10-16T18:29:11.553366Z","end":"2025-10-16T18:29:11.685668Z","steps":["trace[298488287] 'agreement among raft nodes before linearized reading'  (duration: 116.246398ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T18:29:11.685707Z","caller":"traceutil/trace.go:172","msg":"trace[1428547697] transaction","detail":"{read_only:false; response_revision:4; number_of_response:1; }","duration":"134.505531ms","start":"2025-10-16T18:29:11.551192Z","end":"2025-10-16T18:29:11.685698Z","steps":["trace[1428547697] 'process raft request'  (duration: 134.467014ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T18:29:11.685768Z","caller":"traceutil/trace.go:172","msg":"trace[162593698] transaction","detail":"{read_only:false; response_revision:3; number_of_response:1; }","duration":"134.567859ms","start":"2025-10-16T18:29:11.551182Z","end":"2025-10-16T18:29:11.685750Z","steps":["trace[162593698] 'process raft request'  (duration: 118.376287ms)","trace[162593698] 'compare'  (duration: 15.987575ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-16T18:29:11.692198Z","caller":"traceutil/trace.go:172","msg":"trace[1718586548] transaction","detail":"{read_only:false; response_revision:5; number_of_response:1; }","duration":"140.991521ms","start":"2025-10-16T18:29:11.551192Z","end":"2025-10-16T18:29:11.692184Z","steps":["trace[1718586548] 'process raft request'  (duration: 140.881561ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T18:29:11.692263Z","caller":"traceutil/trace.go:172","msg":"trace[443429719] transaction","detail":"{read_only:false; response_revision:6; number_of_response:1; }","duration":"141.037191ms","start":"2025-10-16T18:29:11.551216Z","end":"2025-10-16T18:29:11.692254Z","steps":["trace[443429719] 'process raft request'  (duration: 140.913282ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T18:29:11.692299Z","caller":"traceutil/trace.go:172","msg":"trace[1714469162] transaction","detail":"{read_only:false; response_revision:9; number_of_response:1; }","duration":"140.917491ms","start":"2025-10-16T18:29:11.551364Z","end":"2025-10-16T18:29:11.692282Z","steps":["trace[1714469162] 'process raft request'  (duration: 140.843307ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T18:29:11.692317Z","caller":"traceutil/trace.go:172","msg":"trace[1107597010] transaction","detail":"{read_only:false; response_revision:12; number_of_response:1; }","duration":"111.62409ms","start":"2025-10-16T18:29:11.580683Z","end":"2025-10-16T18:29:11.692307Z","steps":["trace[1107597010] 'process raft request'  (duration: 111.596055ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T18:29:11.692330Z","caller":"traceutil/trace.go:172","msg":"trace[1089377248] transaction","detail":"{read_only:false; response_revision:7; number_of_response:1; }","duration":"141.039296ms","start":"2025-10-16T18:29:11.551285Z","end":"2025-10-16T18:29:11.692324Z","steps":["trace[1089377248] 'process raft request'  (duration: 140.875812ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T18:29:11.692322Z","caller":"traceutil/trace.go:172","msg":"trace[903921389] transaction","detail":"{read_only:false; response_revision:10; number_of_response:1; }","duration":"140.88559ms","start":"2025-10-16T18:29:11.551413Z","end":"2025-10-16T18:29:11.692299Z","steps":["trace[903921389] 'process raft request'  (duration: 140.813154ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T18:29:11.692331Z","caller":"traceutil/trace.go:172","msg":"trace[686178840] transaction","detail":"{read_only:false; response_revision:11; number_of_response:1; }","duration":"128.643561ms","start":"2025-10-16T18:29:11.563664Z","end":"2025-10-16T18:29:11.692307Z","steps":["trace[686178840] 'process raft request'  (duration: 128.584808ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T18:29:11.692350Z","caller":"traceutil/trace.go:172","msg":"trace[1429333816] transaction","detail":"{read_only:false; response_revision:8; number_of_response:1; }","duration":"140.99824ms","start":"2025-10-16T18:29:11.551346Z","end":"2025-10-16T18:29:11.692344Z","steps":["trace[1429333816] 'process raft request'  (duration: 140.839966ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:29:42 up  1:12,  0 user,  load average: 3.64, 2.71, 1.75
	Linux embed-certs-063117 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9c84194bc370d81b03145f5e05c30fff82bf2c260282bf1fc661635d561839c4] <==
	I1016 18:29:19.788699       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 18:29:19.881565       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1016 18:29:19.881742       1 main.go:148] setting mtu 1500 for CNI 
	I1016 18:29:19.881758       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 18:29:19.881782       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T18:29:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 18:29:20.035595       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 18:29:20.035625       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 18:29:20.035636       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 18:29:20.133913       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 18:29:20.436399       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 18:29:20.436431       1 metrics.go:72] Registering metrics
	I1016 18:29:20.436500       1 controller.go:711] "Syncing nftables rules"
	I1016 18:29:30.038818       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:29:30.038873       1 main.go:301] handling current node
	I1016 18:29:40.038095       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:29:40.038138       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d6321e5d30cb6346637999d150bb541cebba0e1e777435bb8fbc8ac88aae2932] <==
	I1016 18:29:11.694116       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1016 18:29:11.694397       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1016 18:29:11.695923       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1016 18:29:11.696106       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1016 18:29:11.704070       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:29:11.704389       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1016 18:29:11.898969       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:29:12.456781       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1016 18:29:12.462705       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1016 18:29:12.462746       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 18:29:13.013196       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 18:29:13.053850       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 18:29:13.194879       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1016 18:29:13.203128       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1016 18:29:13.204530       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 18:29:13.210029       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 18:29:13.483298       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 18:29:14.213316       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 18:29:14.222879       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1016 18:29:14.230985       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1016 18:29:19.235376       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1016 18:29:19.488865       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:29:19.495866       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:29:19.536442       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1016 18:29:41.111524       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:34098: use of closed network connection
	
	
	==> kube-controller-manager [eb87d872316815a00096835277e4c9a3336581216f83c3f9b0aefa426b748208] <==
	I1016 18:29:18.458907       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-063117" podCIDRs=["10.244.0.0/24"]
	I1016 18:29:18.481378       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1016 18:29:18.481399       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:29:18.481413       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 18:29:18.481422       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 18:29:18.481674       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1016 18:29:18.482476       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1016 18:29:18.482506       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1016 18:29:18.482622       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1016 18:29:18.483606       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1016 18:29:18.483608       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 18:29:18.483708       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1016 18:29:18.483992       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1016 18:29:18.484391       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1016 18:29:18.484022       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1016 18:29:18.484830       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 18:29:18.484860       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1016 18:29:18.486072       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1016 18:29:18.486667       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 18:29:18.487204       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:29:18.492655       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:29:18.493749       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1016 18:29:18.500003       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1016 18:29:18.509637       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:29:33.437121       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b26de1f7b23b1b78d1d50c6c6738b40e6c3bb7f797d9516b66614db9355c6fdf] <==
	I1016 18:29:19.658246       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:29:19.726078       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:29:19.827197       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:29:19.828482       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1016 18:29:19.828646       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:29:19.875180       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:29:19.875355       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:29:19.882927       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:29:19.883654       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:29:19.883846       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:29:19.886612       1 config.go:200] "Starting service config controller"
	I1016 18:29:19.886674       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:29:19.886726       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:29:19.886754       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:29:19.886786       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:29:19.886808       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:29:19.887614       1 config.go:309] "Starting node config controller"
	I1016 18:29:19.887763       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:29:19.887791       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:29:19.986886       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1016 18:29:19.986930       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 18:29:19.986969       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [280f90d68f6d263a9bddbb7a030fb7c1c6c1facb2ed3218fdbaeacdd2df9734e] <==
	E1016 18:29:11.502326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 18:29:11.502358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 18:29:11.502449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 18:29:11.502466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:29:11.502513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 18:29:11.502512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 18:29:11.502512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 18:29:11.502594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 18:29:11.502637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:29:11.502645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 18:29:11.502679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 18:29:12.307208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:29:12.368670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 18:29:12.391307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 18:29:12.457082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 18:29:12.538489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 18:29:12.563704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:29:12.589112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 18:29:12.658646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 18:29:12.690677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 18:29:12.769391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 18:29:12.788950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 18:29:12.834488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 18:29:12.988756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1016 18:29:14.899657       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 18:29:15 embed-certs-063117 kubelet[1344]: I1016 18:29:15.106629    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-063117" podStartSLOduration=1.106607178 podStartE2EDuration="1.106607178s" podCreationTimestamp="2025-10-16 18:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:29:15.094822016 +0000 UTC m=+1.127620375" watchObservedRunningTime="2025-10-16 18:29:15.106607178 +0000 UTC m=+1.139405515"
	Oct 16 18:29:15 embed-certs-063117 kubelet[1344]: I1016 18:29:15.118581    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-063117" podStartSLOduration=1.118557099 podStartE2EDuration="1.118557099s" podCreationTimestamp="2025-10-16 18:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:29:15.106875502 +0000 UTC m=+1.139673827" watchObservedRunningTime="2025-10-16 18:29:15.118557099 +0000 UTC m=+1.151355434"
	Oct 16 18:29:15 embed-certs-063117 kubelet[1344]: I1016 18:29:15.118775    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-063117" podStartSLOduration=1.118763446 podStartE2EDuration="1.118763446s" podCreationTimestamp="2025-10-16 18:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:29:15.118653748 +0000 UTC m=+1.151452083" watchObservedRunningTime="2025-10-16 18:29:15.118763446 +0000 UTC m=+1.151561781"
	Oct 16 18:29:15 embed-certs-063117 kubelet[1344]: I1016 18:29:15.134457    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-063117" podStartSLOduration=1.134436551 podStartE2EDuration="1.134436551s" podCreationTimestamp="2025-10-16 18:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:29:15.132863269 +0000 UTC m=+1.165661586" watchObservedRunningTime="2025-10-16 18:29:15.134436551 +0000 UTC m=+1.167234887"
	Oct 16 18:29:18 embed-certs-063117 kubelet[1344]: I1016 18:29:18.478868    1344 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 16 18:29:18 embed-certs-063117 kubelet[1344]: I1016 18:29:18.479641    1344 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 16 18:29:19 embed-certs-063117 kubelet[1344]: I1016 18:29:19.275700    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cb8239f-5115-4775-aab6-f0fc7c2dc2fb-xtables-lock\") pod \"kube-proxy-rsvq2\" (UID: \"7cb8239f-5115-4775-aab6-f0fc7c2dc2fb\") " pod="kube-system/kube-proxy-rsvq2"
	Oct 16 18:29:19 embed-certs-063117 kubelet[1344]: I1016 18:29:19.275773    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbbxr\" (UniqueName: \"kubernetes.io/projected/7cb8239f-5115-4775-aab6-f0fc7c2dc2fb-kube-api-access-sbbxr\") pod \"kube-proxy-rsvq2\" (UID: \"7cb8239f-5115-4775-aab6-f0fc7c2dc2fb\") " pod="kube-system/kube-proxy-rsvq2"
	Oct 16 18:29:19 embed-certs-063117 kubelet[1344]: I1016 18:29:19.275801    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c45c361-9d61-45f5-9863-a1ceb556db84-xtables-lock\") pod \"kindnet-9qp8q\" (UID: \"6c45c361-9d61-45f5-9863-a1ceb556db84\") " pod="kube-system/kindnet-9qp8q"
	Oct 16 18:29:19 embed-certs-063117 kubelet[1344]: I1016 18:29:19.275828    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c45c361-9d61-45f5-9863-a1ceb556db84-lib-modules\") pod \"kindnet-9qp8q\" (UID: \"6c45c361-9d61-45f5-9863-a1ceb556db84\") " pod="kube-system/kindnet-9qp8q"
	Oct 16 18:29:19 embed-certs-063117 kubelet[1344]: I1016 18:29:19.275850    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7cb8239f-5115-4775-aab6-f0fc7c2dc2fb-kube-proxy\") pod \"kube-proxy-rsvq2\" (UID: \"7cb8239f-5115-4775-aab6-f0fc7c2dc2fb\") " pod="kube-system/kube-proxy-rsvq2"
	Oct 16 18:29:19 embed-certs-063117 kubelet[1344]: I1016 18:29:19.275871    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cb8239f-5115-4775-aab6-f0fc7c2dc2fb-lib-modules\") pod \"kube-proxy-rsvq2\" (UID: \"7cb8239f-5115-4775-aab6-f0fc7c2dc2fb\") " pod="kube-system/kube-proxy-rsvq2"
	Oct 16 18:29:19 embed-certs-063117 kubelet[1344]: I1016 18:29:19.275937    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6c45c361-9d61-45f5-9863-a1ceb556db84-cni-cfg\") pod \"kindnet-9qp8q\" (UID: \"6c45c361-9d61-45f5-9863-a1ceb556db84\") " pod="kube-system/kindnet-9qp8q"
	Oct 16 18:29:19 embed-certs-063117 kubelet[1344]: I1016 18:29:19.275992    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkch6\" (UniqueName: \"kubernetes.io/projected/6c45c361-9d61-45f5-9863-a1ceb556db84-kube-api-access-gkch6\") pod \"kindnet-9qp8q\" (UID: \"6c45c361-9d61-45f5-9863-a1ceb556db84\") " pod="kube-system/kindnet-9qp8q"
	Oct 16 18:29:20 embed-certs-063117 kubelet[1344]: I1016 18:29:20.128065    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9qp8q" podStartSLOduration=1.128041231 podStartE2EDuration="1.128041231s" podCreationTimestamp="2025-10-16 18:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:29:20.105384497 +0000 UTC m=+6.138182832" watchObservedRunningTime="2025-10-16 18:29:20.128041231 +0000 UTC m=+6.160839567"
	Oct 16 18:29:20 embed-certs-063117 kubelet[1344]: I1016 18:29:20.682230    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rsvq2" podStartSLOduration=1.682208076 podStartE2EDuration="1.682208076s" podCreationTimestamp="2025-10-16 18:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:29:20.129692123 +0000 UTC m=+6.162490464" watchObservedRunningTime="2025-10-16 18:29:20.682208076 +0000 UTC m=+6.715006412"
	Oct 16 18:29:30 embed-certs-063117 kubelet[1344]: I1016 18:29:30.136967    1344 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 16 18:29:30 embed-certs-063117 kubelet[1344]: I1016 18:29:30.261320    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/023f2420-4132-430e-90ed-4e7c5533aeeb-config-volume\") pod \"coredns-66bc5c9577-v85b5\" (UID: \"023f2420-4132-430e-90ed-4e7c5533aeeb\") " pod="kube-system/coredns-66bc5c9577-v85b5"
	Oct 16 18:29:30 embed-certs-063117 kubelet[1344]: I1016 18:29:30.261370    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvstk\" (UniqueName: \"kubernetes.io/projected/023f2420-4132-430e-90ed-4e7c5533aeeb-kube-api-access-pvstk\") pod \"coredns-66bc5c9577-v85b5\" (UID: \"023f2420-4132-430e-90ed-4e7c5533aeeb\") " pod="kube-system/coredns-66bc5c9577-v85b5"
	Oct 16 18:29:30 embed-certs-063117 kubelet[1344]: I1016 18:29:30.261400    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cc86ca12-3c7b-4447-97a9-b998051c6b68-tmp\") pod \"storage-provisioner\" (UID: \"cc86ca12-3c7b-4447-97a9-b998051c6b68\") " pod="kube-system/storage-provisioner"
	Oct 16 18:29:30 embed-certs-063117 kubelet[1344]: I1016 18:29:30.261421    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clv4h\" (UniqueName: \"kubernetes.io/projected/cc86ca12-3c7b-4447-97a9-b998051c6b68-kube-api-access-clv4h\") pod \"storage-provisioner\" (UID: \"cc86ca12-3c7b-4447-97a9-b998051c6b68\") " pod="kube-system/storage-provisioner"
	Oct 16 18:29:31 embed-certs-063117 kubelet[1344]: I1016 18:29:31.125127    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.12510608 podStartE2EDuration="11.12510608s" podCreationTimestamp="2025-10-16 18:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:29:31.124887133 +0000 UTC m=+17.157685468" watchObservedRunningTime="2025-10-16 18:29:31.12510608 +0000 UTC m=+17.157904416"
	Oct 16 18:29:31 embed-certs-063117 kubelet[1344]: I1016 18:29:31.136401    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-v85b5" podStartSLOduration=12.13637436 podStartE2EDuration="12.13637436s" podCreationTimestamp="2025-10-16 18:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:29:31.135820005 +0000 UTC m=+17.168618353" watchObservedRunningTime="2025-10-16 18:29:31.13637436 +0000 UTC m=+17.169172696"
	Oct 16 18:29:33 embed-certs-063117 kubelet[1344]: I1016 18:29:33.077773    1344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl7b9\" (UniqueName: \"kubernetes.io/projected/2f13025d-16dc-4451-9e8d-c37732eb709a-kube-api-access-rl7b9\") pod \"busybox\" (UID: \"2f13025d-16dc-4451-9e8d-c37732eb709a\") " pod="default/busybox"
	Oct 16 18:29:35 embed-certs-063117 kubelet[1344]: I1016 18:29:35.136603    1344 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.747928443 podStartE2EDuration="2.136580662s" podCreationTimestamp="2025-10-16 18:29:33 +0000 UTC" firstStartedPulling="2025-10-16 18:29:33.359648003 +0000 UTC m=+19.392446341" lastFinishedPulling="2025-10-16 18:29:34.748300246 +0000 UTC m=+20.781098560" observedRunningTime="2025-10-16 18:29:35.136329891 +0000 UTC m=+21.169128229" watchObservedRunningTime="2025-10-16 18:29:35.136580662 +0000 UTC m=+21.169378997"
	
	
	==> storage-provisioner [fc82190beb1590c10d299fd087cbfcd321de87b62e092c1dee3d92b8814c0bd2] <==
	I1016 18:29:30.529936       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 18:29:30.539628       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 18:29:30.539676       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1016 18:29:30.542018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:30.547881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 18:29:30.548069       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 18:29:30.548145       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e411a7bc-148f-42cf-bac0-dc17cef1cd44", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-063117_9b5ec37e-3eaf-4256-83e7-cd5e86a8a739 became leader
	I1016 18:29:30.548200       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-063117_9b5ec37e-3eaf-4256-83e7-cd5e86a8a739!
	W1016 18:29:30.550019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:30.553829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 18:29:30.649315       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-063117_9b5ec37e-3eaf-4256-83e7-cd5e86a8a739!
	W1016 18:29:32.557409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:32.562146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:34.565464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:34.569457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:36.572244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:36.575872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:38.579229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:38.584046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:40.587848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:40.591699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:42.595633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:29:42.603471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-063117 -n embed-certs-063117
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-063117 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.37s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.47s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-794682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-794682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (259.075584ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:30:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-794682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
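Note: the MK_ADDON_ENABLE_PAUSED failure above is not about the addon itself: the paused-state probe shells out to "sudo runc list -f json", which exits non-zero because the default runc state directory /run/runc does not exist on this crio node. A minimal reproduction sketch, assuming the profile is still running (both commands appear verbatim elsewhere in this report):
	# The probe minikube runs; fails exactly as in the stderr above
	minikube -p newest-cni-794682 ssh -- sudo runc list -f json
	# The CRI-level equivalent, which does answer under crio
	minikube -p newest-cni-794682 ssh -- sudo crictl ps -a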
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-794682
helpers_test.go:243: (dbg) docker inspect newest-cni-794682:

-- stdout --
	[
	    {
	        "Id": "c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173",
	        "Created": "2025-10-16T18:29:53.821117165Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 263646,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:29:53.86093799Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173/hostname",
	        "HostsPath": "/var/lib/docker/containers/c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173/hosts",
	        "LogPath": "/var/lib/docker/containers/c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173/c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173-json.log",
	        "Name": "/newest-cni-794682",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-794682:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-794682",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173",
	                "LowerDir": "/var/lib/docker/overlay2/c7b8e24a1f9d7fba0e516e0f5cbd09bd62316d6698df3d8c1cda2d0d3d6d0153-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7b8e24a1f9d7fba0e516e0f5cbd09bd62316d6698df3d8c1cda2d0d3d6d0153/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7b8e24a1f9d7fba0e516e0f5cbd09bd62316d6698df3d8c1cda2d0d3d6d0153/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7b8e24a1f9d7fba0e516e0f5cbd09bd62316d6698df3d8c1cda2d0d3d6d0153/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-794682",
	                "Source": "/var/lib/docker/volumes/newest-cni-794682/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-794682",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-794682",
	                "name.minikube.sigs.k8s.io": "newest-cni-794682",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "42065659033f6f5c6acf8e1016c652f822189f3c2956221795a02d4375038354",
	            "SandboxKey": "/var/run/docker/netns/42065659033f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-794682": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:eb:0c:2d:27:e5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e00e8380442887174d300c66955f01f91b4ede1590a4ed3c23c8634e39c04bf",
	                    "EndpointID": "d7062ba6188b7fe826aa62cec2bc4fc9909d9bb3a14ca8096df1cc94fb3080e5",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-794682",
	                        "c5fcc0506110"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
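Note: the harness consumes only a few fields from an inspect dump like the one above, and those can be pulled directly with Go templates; the same template shape appears later in this log for port 22/tcp. A sketch against this container, assuming it is still up:
	# Host port mapped to the apiserver port (8443/tcp); per the NetworkSettings above this prints 33086
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-794682
	# Container IP on the profile network; per the Networks block above this prints 192.168.94.2
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' newest-cni-794682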
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-794682 -n newest-cni-794682
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-794682 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-794682 logs -n 25: (1.257932028s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-956814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:27 UTC │
	│ start   │ -p old-k8s-version-956814 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:27 UTC │ 16 Oct 25 18:28 UTC │
	│ addons  │ enable metrics-server -p no-preload-808539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │                     │
	│ stop    │ -p no-preload-808539 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ addons  │ enable dashboard -p no-preload-808539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ start   │ -p no-preload-808539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:29 UTC │
	│ image   │ old-k8s-version-956814 image list --format=json                                                                                                                                                                                               │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ pause   │ -p old-k8s-version-956814 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │                     │
	│ delete  │ -p old-k8s-version-956814                                                                                                                                                                                                                     │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ delete  │ -p old-k8s-version-956814                                                                                                                                                                                                                     │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ start   │ -p embed-certs-063117 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p cert-expiration-489554 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-489554       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p cert-expiration-489554                                                                                                                                                                                                                     │ cert-expiration-489554       │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p disable-driver-mounts-246527                                                                                                                                                                                                               │ disable-driver-mounts-246527 │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p default-k8s-diff-port-523257 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ image   │ no-preload-808539 image list --format=json                                                                                                                                                                                                    │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ pause   │ -p no-preload-808539 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-063117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ stop    │ -p embed-certs-063117 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:30 UTC │
	│ delete  │ -p no-preload-808539                                                                                                                                                                                                                          │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p no-preload-808539                                                                                                                                                                                                                          │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p newest-cni-794682 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:30 UTC │
	│ addons  │ enable dashboard -p embed-certs-063117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ start   │ -p embed-certs-063117 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-794682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:30:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:30:01.812240  265507 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:30:01.812484  265507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:30:01.812493  265507 out.go:374] Setting ErrFile to fd 2...
	I1016 18:30:01.812497  265507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:30:01.812732  265507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:30:01.813228  265507 out.go:368] Setting JSON to false
	I1016 18:30:01.814637  265507 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4350,"bootTime":1760635052,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:30:01.814736  265507 start.go:141] virtualization: kvm guest
	I1016 18:30:01.817040  265507 out.go:179] * [embed-certs-063117] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:30:01.818618  265507 notify.go:220] Checking for updates...
	I1016 18:30:01.818670  265507 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:30:01.820154  265507 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:30:01.821559  265507 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:30:01.823102  265507 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 18:30:01.824834  265507 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:30:01.826528  265507 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:30:01.828425  265507 config.go:182] Loaded profile config "embed-certs-063117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:30:01.829067  265507 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:30:01.856846  265507 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 18:30:01.856938  265507 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:30:01.922694  265507 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-16 18:30:01.910190108 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:30:01.922841  265507 docker.go:318] overlay module found
	I1016 18:30:01.924791  265507 out.go:179] * Using the docker driver based on existing profile
	I1016 18:30:01.926334  265507 start.go:305] selected driver: docker
	I1016 18:30:01.926356  265507 start.go:925] validating driver "docker" against &{Name:embed-certs-063117 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-063117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:30:01.926477  265507 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:30:01.927159  265507 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:30:01.997543  265507 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-16 18:30:01.984959404 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:30:01.997858  265507 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:30:01.997893  265507 cni.go:84] Creating CNI manager for ""
	I1016 18:30:01.997938  265507 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:30:01.997969  265507 start.go:349] cluster config:
	{Name:embed-certs-063117 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-063117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:30:01.999942  265507 out.go:179] * Starting "embed-certs-063117" primary control-plane node in "embed-certs-063117" cluster
	I1016 18:30:02.001426  265507 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:30:02.003294  265507 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	W1016 18:29:58.093970  254209 node_ready.go:57] node "default-k8s-diff-port-523257" has "Ready":"False" status (will retry)
	W1016 18:30:00.593466  254209 node_ready.go:57] node "default-k8s-diff-port-523257" has "Ready":"False" status (will retry)
	I1016 18:30:02.004746  265507 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:30:02.004793  265507 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1016 18:30:02.004793  265507 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:30:02.004806  265507 cache.go:58] Caching tarball of preloaded images
	I1016 18:30:02.005042  265507 preload.go:233] Found /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1016 18:30:02.005066  265507 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:30:02.005192  265507 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/embed-certs-063117/config.json ...
	I1016 18:30:02.027691  265507 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:30:02.027710  265507 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:30:02.027741  265507 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:30:02.027775  265507 start.go:360] acquireMachinesLock for embed-certs-063117: {Name:mkbab1db32bf404925228084e4b13c0778c5e2d7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:30:02.027860  265507 start.go:364] duration metric: took 51.119µs to acquireMachinesLock for "embed-certs-063117"
	I1016 18:30:02.027884  265507 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:30:02.027890  265507 fix.go:54] fixHost starting: 
	I1016 18:30:02.028141  265507 cli_runner.go:164] Run: docker container inspect embed-certs-063117 --format={{.State.Status}}
	I1016 18:30:02.048273  265507 fix.go:112] recreateIfNeeded on embed-certs-063117: state=Stopped err=<nil>
	W1016 18:30:02.048328  265507 fix.go:138] unexpected machine state, will restart: <nil>
	I1016 18:29:59.099478  262895 out.go:252]   - Generating certificates and keys ...
	I1016 18:29:59.099593  262895 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 18:29:59.099681  262895 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 18:29:59.242072  262895 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 18:29:59.746884  262895 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 18:29:59.789733  262895 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 18:30:00.575469  262895 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 18:30:00.831902  262895 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 18:30:00.832011  262895 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-794682] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1016 18:30:00.918476  262895 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 18:30:00.918613  262895 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-794682] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1016 18:30:01.038332  262895 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 18:30:01.175776  262895 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 18:30:01.419437  262895 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 18:30:01.419523  262895 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 18:30:01.561683  262895 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 18:30:01.987402  262895 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 18:30:02.325803  262895 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 18:30:02.553609  262895 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 18:30:02.991057  262895 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 18:30:02.991555  262895 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 18:30:02.995399  262895 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 18:30:02.997027  262895 out.go:252]   - Booting up control plane ...
	I1016 18:30:02.997126  262895 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 18:30:02.997217  262895 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 18:30:02.998835  262895 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 18:30:03.012796  262895 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 18:30:03.012931  262895 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 18:30:03.020572  262895 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 18:30:03.020961  262895 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 18:30:03.021034  262895 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 18:30:03.118196  262895 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 18:30:03.118351  262895 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 18:29:59.749773  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1016 18:29:59.749836  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:29:59.749886  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:29:59.777778  228782 cri.go:89] found id: "0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:29:59.777802  228782 cri.go:89] found id: "c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	I1016 18:29:59.777808  228782 cri.go:89] found id: ""
	I1016 18:29:59.777824  228782 logs.go:282] 2 containers: [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]
	I1016 18:29:59.777887  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:59.782414  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:59.786413  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:29:59.786465  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:29:59.815353  228782 cri.go:89] found id: ""
	I1016 18:29:59.815375  228782 logs.go:282] 0 containers: []
	W1016 18:29:59.815383  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:29:59.815389  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:29:59.815434  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:29:59.844424  228782 cri.go:89] found id: ""
	I1016 18:29:59.844447  228782 logs.go:282] 0 containers: []
	W1016 18:29:59.844455  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:29:59.844461  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:29:59.844509  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:29:59.872450  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:29:59.872473  228782 cri.go:89] found id: ""
	I1016 18:29:59.872482  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:29:59.872530  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:59.876536  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:29:59.876606  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:29:59.904378  228782 cri.go:89] found id: ""
	I1016 18:29:59.904405  228782 logs.go:282] 0 containers: []
	W1016 18:29:59.904413  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:29:59.904419  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:29:59.904470  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:29:59.932524  228782 cri.go:89] found id: "5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:29:59.932549  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:29:59.932556  228782 cri.go:89] found id: ""
	I1016 18:29:59.932564  228782 logs.go:282] 2 containers: [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:29:59.932620  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:59.936979  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:29:59.940890  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:29:59.940959  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:29:59.968794  228782 cri.go:89] found id: ""
	I1016 18:29:59.968817  228782 logs.go:282] 0 containers: []
	W1016 18:29:59.968827  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:29:59.968834  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:29:59.968892  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:29:59.998110  228782 cri.go:89] found id: ""
	I1016 18:29:59.998140  228782 logs.go:282] 0 containers: []
	W1016 18:29:59.998150  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:29:59.998168  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:29:59.998183  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:00.050611  228782 logs.go:123] Gathering logs for kube-controller-manager [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a] ...
	I1016 18:30:00.050644  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:00.080764  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:30:00.080791  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:30:00.112868  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:30:00.112897  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:30:00.171326  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:30:00.171367  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:30:00.260513  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:30:00.260548  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:30:00.276290  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:30:00.276319  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1016 18:30:03.772160  228782 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (3.495819317s)
	W1016 18:30:03.772193  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45528->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:45528->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1016 18:30:03.772203  228782 logs.go:123] Gathering logs for kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f] ...
	I1016 18:30:03.772217  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	W1016 18:30:03.799490  228782 logs.go:130] failed kube-apiserver [c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f": Process exited with status 1
	stdout:
	
	stderr:
	E1016 18:30:03.796932    5655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f\": container with ID starting with c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f not found: ID does not exist" containerID="c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	time="2025-10-16T18:30:03Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f\": container with ID starting with c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f not found: ID does not exist"
	 output: 
	** stderr ** 
	E1016 18:30:03.796932    5655 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f\": container with ID starting with c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f not found: ID does not exist" containerID="c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f"
	time="2025-10-16T18:30:03Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f\": container with ID starting with c07b1a4c77511efb754926a16cd346f8d296a6a8011b3101b0c23fe63380dc2f not found: ID does not exist"
	
	** /stderr **
	I1016 18:30:03.799516  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:30:03.799530  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:30:03.831429  228782 logs.go:123] Gathering logs for kube-apiserver [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98] ...
	I1016 18:30:03.831455  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:02.049972  265507 out.go:252] * Restarting existing docker container for "embed-certs-063117" ...
	I1016 18:30:02.050049  265507 cli_runner.go:164] Run: docker start embed-certs-063117
	I1016 18:30:02.314849  265507 cli_runner.go:164] Run: docker container inspect embed-certs-063117 --format={{.State.Status}}
	I1016 18:30:02.338626  265507 kic.go:430] container "embed-certs-063117" state is running.
	I1016 18:30:02.339029  265507 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-063117
	I1016 18:30:02.359774  265507 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/embed-certs-063117/config.json ...
	I1016 18:30:02.360091  265507 machine.go:93] provisionDockerMachine start ...
	I1016 18:30:02.360194  265507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-063117
	I1016 18:30:02.380574  265507 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:02.380941  265507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1016 18:30:02.380955  265507 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:30:02.381816  265507 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37566->127.0.0.1:33088: read: connection reset by peer
	I1016 18:30:05.534164  265507 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-063117
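	Note: the Go template in the docker inspect calls above, {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, is how the host port mapped to the container's SSH port 22 is discovered. Reproduced standalone as a sketch (profile name taken from this run):

	    # Prints 33088 in this run - the 127.0.0.1:<port> address the SSH dials above use.
	    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-063117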
	
	I1016 18:30:05.534197  265507 ubuntu.go:182] provisioning hostname "embed-certs-063117"
	I1016 18:30:05.534253  265507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-063117
	I1016 18:30:05.557384  265507 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:05.557685  265507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1016 18:30:05.557711  265507 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-063117 && echo "embed-certs-063117" | sudo tee /etc/hostname
	I1016 18:30:05.715630  265507 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-063117
	
	I1016 18:30:05.715744  265507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-063117
	I1016 18:30:05.735155  265507 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:05.735365  265507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1016 18:30:05.735383  265507 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-063117' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-063117/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-063117' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:30:05.875794  265507 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:30:05.875826  265507 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8849/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8849/.minikube}
	I1016 18:30:05.875857  265507 ubuntu.go:190] setting up certificates
	I1016 18:30:05.875871  265507 provision.go:84] configureAuth start
	I1016 18:30:05.875926  265507 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-063117
	I1016 18:30:05.895415  265507 provision.go:143] copyHostCerts
	I1016 18:30:05.895481  265507 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem, removing ...
	I1016 18:30:05.895501  265507 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem
	I1016 18:30:05.895591  265507 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem (1078 bytes)
	I1016 18:30:05.895742  265507 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem, removing ...
	I1016 18:30:05.895755  265507 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem
	I1016 18:30:05.895800  265507 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem (1123 bytes)
	I1016 18:30:05.895900  265507 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem, removing ...
	I1016 18:30:05.895910  265507 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem
	I1016 18:30:05.895955  265507 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem (1679 bytes)
	I1016 18:30:05.896043  265507 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem org=jenkins.embed-certs-063117 san=[127.0.0.1 192.168.103.2 embed-certs-063117 localhost minikube]
	I1016 18:30:06.145485  265507 provision.go:177] copyRemoteCerts
	I1016 18:30:06.145565  265507 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:30:06.145628  265507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-063117
	I1016 18:30:06.170074  265507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/embed-certs-063117/id_rsa Username:docker}
	I1016 18:30:06.276649  265507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 18:30:06.294386  265507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1016 18:30:06.311761  265507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:30:06.330775  265507 provision.go:87] duration metric: took 454.889005ms to configureAuth
	I1016 18:30:06.330806  265507 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:30:06.330968  265507 config.go:182] Loaded profile config "embed-certs-063117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:30:06.331084  265507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-063117
	I1016 18:30:06.349971  265507 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:06.350227  265507 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1016 18:30:06.350248  265507 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:30:06.668748  265507 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:30:06.668777  265507 machine.go:96] duration metric: took 4.308663533s to provisionDockerMachine
	I1016 18:30:06.668793  265507 start.go:293] postStartSetup for "embed-certs-063117" (driver="docker")
	I1016 18:30:06.668806  265507 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:30:06.668891  265507 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:30:06.668960  265507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-063117
	I1016 18:30:06.690359  265507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/embed-certs-063117/id_rsa Username:docker}
	I1016 18:30:06.790880  265507 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:30:06.794661  265507 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:30:06.794691  265507 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:30:06.794701  265507 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/addons for local assets ...
	I1016 18:30:06.794777  265507 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/files for local assets ...
	I1016 18:30:06.794852  265507 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem -> 123752.pem in /etc/ssl/certs
	I1016 18:30:06.794965  265507 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:30:06.803637  265507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /etc/ssl/certs/123752.pem (1708 bytes)
	W1016 18:30:02.593915  254209 node_ready.go:57] node "default-k8s-diff-port-523257" has "Ready":"False" status (will retry)
	W1016 18:30:05.093529  254209 node_ready.go:57] node "default-k8s-diff-port-523257" has "Ready":"False" status (will retry)
	I1016 18:30:06.823654  265507 start.go:296] duration metric: took 154.846209ms for postStartSetup
	I1016 18:30:06.823790  265507 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:30:06.823829  265507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-063117
	I1016 18:30:06.844514  265507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/embed-certs-063117/id_rsa Username:docker}
	I1016 18:30:06.940796  265507 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:30:06.945866  265507 fix.go:56] duration metric: took 4.917968407s for fixHost
	I1016 18:30:06.945897  265507 start.go:83] releasing machines lock for "embed-certs-063117", held for 4.918023933s
	I1016 18:30:06.946052  265507 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-063117
	I1016 18:30:06.965969  265507 ssh_runner.go:195] Run: cat /version.json
	I1016 18:30:06.966024  265507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-063117
	I1016 18:30:06.966118  265507 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:30:06.966197  265507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-063117
	I1016 18:30:06.987135  265507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/embed-certs-063117/id_rsa Username:docker}
	I1016 18:30:06.987668  265507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/embed-certs-063117/id_rsa Username:docker}
	I1016 18:30:07.155928  265507 ssh_runner.go:195] Run: systemctl --version
	I1016 18:30:07.163359  265507 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:30:07.206339  265507 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:30:07.216960  265507 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:30:07.217055  265507 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:30:07.232155  265507 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:30:07.232182  265507 start.go:495] detecting cgroup driver to use...
	I1016 18:30:07.232217  265507 detect.go:190] detected "systemd" cgroup driver on host os
	I1016 18:30:07.232283  265507 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:30:07.249859  265507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:30:07.265727  265507 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:30:07.265787  265507 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:30:07.284679  265507 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:30:07.300647  265507 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:30:07.409180  265507 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:30:07.507307  265507 docker.go:234] disabling docker service ...
	I1016 18:30:07.507370  265507 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:30:07.524151  265507 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:30:07.540854  265507 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:30:07.645631  265507 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:30:07.741035  265507 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:30:07.756459  265507 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:30:07.771282  265507 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:30:07.771339  265507 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:07.781089  265507 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1016 18:30:07.781155  265507 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:07.791261  265507 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:07.800500  265507 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:07.809820  265507 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:30:07.818129  265507 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:07.827452  265507 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:07.836481  265507 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:07.845811  265507 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:30:07.853909  265507 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:30:07.862207  265507 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:30:07.946759  265507 ssh_runner.go:195] Run: sudo systemctl restart crio
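	The sequence of sed edits above rewrites a single CRI-O drop-in; consolidated here as a sketch (same file and values as in the log, not an official minikube script):

	    conf=/etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$conf"
	    sudo sed -i '/conmon_cgroup = .*/d' "$conf"                        # drop any stale value
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
	    sudo systemctl daemon-reload && sudo systemctl restart crio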
	I1016 18:30:08.063006  265507 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:30:08.063073  265507 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:30:08.068166  265507 start.go:563] Will wait 60s for crictl version
	I1016 18:30:08.068229  265507 ssh_runner.go:195] Run: which crictl
	I1016 18:30:08.072595  265507 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:30:08.103252  265507 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:30:08.103341  265507 ssh_runner.go:195] Run: crio --version
	I1016 18:30:08.136522  265507 ssh_runner.go:195] Run: crio --version
	I1016 18:30:04.120394  262895 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001956027s
	I1016 18:30:04.124006  262895 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 18:30:04.124446  262895 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1016 18:30:04.124583  262895 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 18:30:04.124691  262895 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 18:30:05.129213  262895 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.004912383s
	I1016 18:30:06.085502  262895 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.961449125s
	I1016 18:30:08.126458  262895 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002228994s
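	The three control-plane-check probes above are plain HTTPS health endpoints, so the same checks can be repeated by hand from inside the node when a component stalls (addresses taken from this run; -k because the serving certs are not in the host trust store):

	    curl -k https://192.168.94.2:8443/livez     # kube-apiserver
	    curl -k https://127.0.0.1:10257/healthz     # kube-controller-manager
	    curl -k https://127.0.0.1:10259/livez       # kube-scheduler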
	I1016 18:30:08.138482  262895 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 18:30:08.151628  262895 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 18:30:08.162685  262895 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 18:30:08.162989  262895 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-794682 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 18:30:08.173054  262895 kubeadm.go:318] [bootstrap-token] Using token: pyh8xt.4a3wzj866e6fcaz6
	I1016 18:30:08.173227  265507 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:30:08.174973  262895 out.go:252]   - Configuring RBAC rules ...
	I1016 18:30:08.175117  262895 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 18:30:08.180152  262895 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 18:30:08.187057  262895 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 18:30:08.190328  262895 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 18:30:08.194124  262895 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 18:30:08.199981  262895 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 18:30:06.365792  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:30:06.366155  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:30:06.366212  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:30:06.366267  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:30:06.396206  228782 cri.go:89] found id: "0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:06.396231  228782 cri.go:89] found id: ""
	I1016 18:30:06.396242  228782 logs.go:282] 1 containers: [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98]
	I1016 18:30:06.396297  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:06.400700  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:30:06.400780  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:30:06.431324  228782 cri.go:89] found id: ""
	I1016 18:30:06.431356  228782 logs.go:282] 0 containers: []
	W1016 18:30:06.431367  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:30:06.431378  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:30:06.431431  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:30:06.459357  228782 cri.go:89] found id: ""
	I1016 18:30:06.459383  228782 logs.go:282] 0 containers: []
	W1016 18:30:06.459390  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:30:06.459399  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:30:06.459456  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:30:06.488031  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:06.488058  228782 cri.go:89] found id: ""
	I1016 18:30:06.488068  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:30:06.488127  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:06.492238  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:30:06.492317  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:30:06.521568  228782 cri.go:89] found id: ""
	I1016 18:30:06.521593  228782 logs.go:282] 0 containers: []
	W1016 18:30:06.521603  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:30:06.521610  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:30:06.521663  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:30:06.552680  228782 cri.go:89] found id: "5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:06.552699  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:30:06.552703  228782 cri.go:89] found id: ""
	I1016 18:30:06.552724  228782 logs.go:282] 2 containers: [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:30:06.552784  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:06.556969  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:06.561071  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:30:06.561145  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:30:06.591678  228782 cri.go:89] found id: ""
	I1016 18:30:06.591705  228782 logs.go:282] 0 containers: []
	W1016 18:30:06.591739  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:30:06.591747  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:30:06.591808  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:30:06.623497  228782 cri.go:89] found id: ""
	I1016 18:30:06.623526  228782 logs.go:282] 0 containers: []
	W1016 18:30:06.623534  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:30:06.623545  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:30:06.623559  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:30:06.692272  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:30:06.692292  228782 logs.go:123] Gathering logs for kube-controller-manager [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a] ...
	I1016 18:30:06.692303  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:06.719490  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:30:06.719519  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:30:06.777296  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:30:06.777331  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:30:06.876384  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:30:06.876412  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:30:06.892741  228782 logs.go:123] Gathering logs for kube-apiserver [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98] ...
	I1016 18:30:06.892771  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:06.925681  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:30:06.925707  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:06.988873  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:30:06.988908  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:30:07.016612  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:30:07.016636  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:30:08.175093  265507 cli_runner.go:164] Run: docker network inspect embed-certs-063117 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:30:08.197116  265507 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1016 18:30:08.201728  265507 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
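	The one-liner above is an idempotent /etc/hosts update: filter out any existing host.minikube.internal entry, append the fresh mapping, and copy the temp file back in one sudo step. The same pattern generalized, as a sketch (the helper name is illustrative):

	    # update_host IP NAME: replace-or-append a tab-separated /etc/hosts entry.
	    update_host() {
	        { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
	        sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
	    }
	    update_host 192.168.103.1 host.minikube.internal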
	I1016 18:30:08.213561  265507 kubeadm.go:883] updating cluster {Name:embed-certs-063117 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-063117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:30:08.213674  265507 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:30:08.213761  265507 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:30:08.247562  265507 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:30:08.247589  265507 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:30:08.247641  265507 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:30:08.274667  265507 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:30:08.274698  265507 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:30:08.274708  265507 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1016 18:30:08.274878  265507 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-063117 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-063117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
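	In the kubelet unit rendered above, the empty ExecStart= line is deliberate systemd drop-in syntax: it clears the ExecStart inherited from /lib/systemd/system/kubelet.service before the minikube-specific command line is applied. The drop-in is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps below; writing an equivalent file by hand would look like this sketch:

	    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	    [Unit]
	    Wants=crio.service

	    [Service]
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-063117 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	    EOF
	    sudo systemctl daemon-reload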
	I1016 18:30:08.274972  265507 ssh_runner.go:195] Run: crio config
	I1016 18:30:08.325084  265507 cni.go:84] Creating CNI manager for ""
	I1016 18:30:08.325108  265507 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:30:08.325124  265507 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:30:08.325148  265507 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-063117 NodeName:embed-certs-063117 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:30:08.325282  265507 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-063117"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 18:30:08.325340  265507 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:30:08.334566  265507 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:30:08.334645  265507 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:30:08.343129  265507 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1016 18:30:08.357088  265507 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:30:08.370681  265507 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
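	The 2217-byte file just copied is the rendered kubeadm config shown above. If a later phase fails, one way to sanity-check the file in place is kubeadm's own validator (a sketch; kubeadm config validate has existed since v1.26, well before the v1.34.1 binaries used here):

	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new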
	I1016 18:30:08.384600  265507 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1016 18:30:08.388566  265507 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:30:08.400096  265507 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:30:08.482530  265507 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:30:08.508450  265507 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/embed-certs-063117 for IP: 192.168.103.2
	I1016 18:30:08.508474  265507 certs.go:195] generating shared ca certs ...
	I1016 18:30:08.508495  265507 certs.go:227] acquiring lock for ca certs: {Name:mkebf15a3970a66f77cf66e14f6efeaafcab5e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:08.508657  265507 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key
	I1016 18:30:08.508759  265507 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key
	I1016 18:30:08.508775  265507 certs.go:257] generating profile certs ...
	I1016 18:30:08.508883  265507 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/embed-certs-063117/client.key
	I1016 18:30:08.508952  265507 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/embed-certs-063117/apiserver.key.bd95923e
	I1016 18:30:08.509018  265507 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/embed-certs-063117/proxy-client.key
	I1016 18:30:08.509164  265507 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem (1338 bytes)
	W1016 18:30:08.509204  265507 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375_empty.pem, impossibly tiny 0 bytes
	I1016 18:30:08.509218  265507 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem (1679 bytes)
	I1016 18:30:08.509254  265507 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem (1078 bytes)
	I1016 18:30:08.509288  265507 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:30:08.509349  265507 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem (1679 bytes)
	I1016 18:30:08.509414  265507 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:30:08.510316  265507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:30:08.530386  265507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1016 18:30:08.552520  265507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:30:08.580587  265507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1016 18:30:08.612871  265507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/embed-certs-063117/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1016 18:30:08.633312  265507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/embed-certs-063117/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 18:30:08.653779  265507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/embed-certs-063117/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:30:08.673659  265507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/embed-certs-063117/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:30:08.693018  265507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:30:08.713093  265507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem --> /usr/share/ca-certificates/12375.pem (1338 bytes)
	I1016 18:30:08.735672  265507 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /usr/share/ca-certificates/123752.pem (1708 bytes)
	I1016 18:30:08.756877  265507 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:30:08.771701  265507 ssh_runner.go:195] Run: openssl version
	I1016 18:30:08.779494  265507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:30:08.792443  265507 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:30:08.799748  265507 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:30:08.799822  265507 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:30:08.845411  265507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:30:08.854393  265507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12375.pem && ln -fs /usr/share/ca-certificates/12375.pem /etc/ssl/certs/12375.pem"
	I1016 18:30:08.863460  265507 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12375.pem
	I1016 18:30:08.867584  265507 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:49 /usr/share/ca-certificates/12375.pem
	I1016 18:30:08.867645  265507 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12375.pem
	I1016 18:30:08.906397  265507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12375.pem /etc/ssl/certs/51391683.0"
	I1016 18:30:08.917552  265507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123752.pem && ln -fs /usr/share/ca-certificates/123752.pem /etc/ssl/certs/123752.pem"
	I1016 18:30:08.927858  265507 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123752.pem
	I1016 18:30:08.933351  265507 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:49 /usr/share/ca-certificates/123752.pem
	I1016 18:30:08.933408  265507 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123752.pem
	I1016 18:30:08.978660  265507 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123752.pem /etc/ssl/certs/3ec20f2e.0"
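	The hex names linked above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: OpenSSL resolves trust anchors in /etc/ssl/certs by <subject-hash>.0, and the hash is exactly what the openssl x509 -hash -noout runs above print. The generic recipe, as a sketch:

	    pem=/usr/share/ca-certificates/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$pem")
	    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"   # b5213941.0 for this CA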
	I1016 18:30:08.987704  265507 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:30:08.991872  265507 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:30:09.030892  265507 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:30:09.078104  265507 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:30:09.118471  265507 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:30:09.170574  265507 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:30:09.229253  265507 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
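	Each -checkend 86400 run above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is what forces regeneration before the control plane restarts. Standalone form:

	    # Exit 0 = still valid 24h from now; exit 1 = expiring or expired.
	    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	        || echo "certificate expires within 24h"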
	I1016 18:30:09.267899  265507 kubeadm.go:400] StartCluster: {Name:embed-certs-063117 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-063117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:30:09.268066  265507 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:30:09.268169  265507 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:30:09.302374  265507 cri.go:89] found id: "3e0c4612dffa1aabc4e2f885041d6627f61173da3b7020983a01c437c6a01614"
	I1016 18:30:09.302396  265507 cri.go:89] found id: "121a4f69e5a4ec28f63e829110167be9cf60003ff5d32b2bdc8c692d0ace2885"
	I1016 18:30:09.302400  265507 cri.go:89] found id: "06ca051cf2af9db9b9423a3d071cf2e2f07fed9b27fcff6325f04c31e90791ba"
	I1016 18:30:09.302404  265507 cri.go:89] found id: "2beb45b09647681cb2d18ce222e01f57ca8f2532e9f2683c679b5b3bbb182aeb"
	I1016 18:30:09.302407  265507 cri.go:89] found id: ""
	I1016 18:30:09.302452  265507 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 18:30:09.315521  265507 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:30:09Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:30:09.315596  265507 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:30:09.324977  265507 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 18:30:09.324996  265507 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 18:30:09.325048  265507 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 18:30:09.334159  265507 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:30:09.334971  265507 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-063117" does not appear in /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:30:09.335433  265507 kubeconfig.go:62] /home/jenkins/minikube-integration/21738-8849/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-063117" cluster setting kubeconfig missing "embed-certs-063117" context setting]
	I1016 18:30:09.336246  265507 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:09.338040  265507 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 18:30:09.347433  265507 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1016 18:30:09.347472  265507 kubeadm.go:601] duration metric: took 22.469911ms to restartPrimaryControlPlane
	I1016 18:30:09.347482  265507 kubeadm.go:402] duration metric: took 79.595769ms to StartCluster
	I1016 18:30:09.347501  265507 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:09.347575  265507 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:30:09.349586  265507 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:09.349896  265507 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:30:09.350025  265507 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:30:09.350141  265507 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-063117"
	I1016 18:30:09.350162  265507 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-063117"
	I1016 18:30:09.350164  265507 config.go:182] Loaded profile config "embed-certs-063117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	W1016 18:30:09.350170  265507 addons.go:247] addon storage-provisioner should already be in state true
	I1016 18:30:09.350188  265507 addons.go:69] Setting dashboard=true in profile "embed-certs-063117"
	I1016 18:30:09.350214  265507 addons.go:69] Setting default-storageclass=true in profile "embed-certs-063117"
	I1016 18:30:09.350238  265507 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-063117"
	I1016 18:30:09.350247  265507 addons.go:238] Setting addon dashboard=true in "embed-certs-063117"
	W1016 18:30:09.350258  265507 addons.go:247] addon dashboard should already be in state true
	I1016 18:30:09.350295  265507 host.go:66] Checking if "embed-certs-063117" exists ...
	I1016 18:30:09.350204  265507 host.go:66] Checking if "embed-certs-063117" exists ...
	I1016 18:30:09.350611  265507 cli_runner.go:164] Run: docker container inspect embed-certs-063117 --format={{.State.Status}}
	I1016 18:30:09.350787  265507 cli_runner.go:164] Run: docker container inspect embed-certs-063117 --format={{.State.Status}}
	I1016 18:30:09.350793  265507 cli_runner.go:164] Run: docker container inspect embed-certs-063117 --format={{.State.Status}}
	I1016 18:30:09.352067  265507 out.go:179] * Verifying Kubernetes components...
	I1016 18:30:09.353958  265507 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:30:09.379263  265507 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:30:09.380686  265507 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1016 18:30:09.380789  265507 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:30:09.380942  265507 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:30:09.381006  265507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-063117
	I1016 18:30:09.381487  265507 addons.go:238] Setting addon default-storageclass=true in "embed-certs-063117"
	W1016 18:30:09.381511  265507 addons.go:247] addon default-storageclass should already be in state true
	I1016 18:30:09.381540  265507 host.go:66] Checking if "embed-certs-063117" exists ...
	I1016 18:30:09.381947  265507 cli_runner.go:164] Run: docker container inspect embed-certs-063117 --format={{.State.Status}}
	I1016 18:30:09.384425  265507 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1016 18:30:08.534486  262895 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 18:30:08.953223  262895 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 18:30:09.533534  262895 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 18:30:09.535020  262895 kubeadm.go:318] 
	I1016 18:30:09.535769  262895 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 18:30:09.535805  262895 kubeadm.go:318] 
	I1016 18:30:09.535912  262895 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 18:30:09.535920  262895 kubeadm.go:318] 
	I1016 18:30:09.535972  262895 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 18:30:09.536946  262895 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 18:30:09.537217  262895 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 18:30:09.537300  262895 kubeadm.go:318] 
	I1016 18:30:09.537405  262895 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 18:30:09.537455  262895 kubeadm.go:318] 
	I1016 18:30:09.537562  262895 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 18:30:09.537610  262895 kubeadm.go:318] 
	I1016 18:30:09.537685  262895 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 18:30:09.537785  262895 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 18:30:09.537866  262895 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 18:30:09.537873  262895 kubeadm.go:318] 
	I1016 18:30:09.537968  262895 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 18:30:09.538115  262895 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 18:30:09.538130  262895 kubeadm.go:318] 
	I1016 18:30:09.538226  262895 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token pyh8xt.4a3wzj866e6fcaz6 \
	I1016 18:30:09.538350  262895 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c \
	I1016 18:30:09.538376  262895 kubeadm.go:318] 	--control-plane 
	I1016 18:30:09.538382  262895 kubeadm.go:318] 
	I1016 18:30:09.538476  262895 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 18:30:09.538481  262895 kubeadm.go:318] 
	I1016 18:30:09.538575  262895 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token pyh8xt.4a3wzj866e6fcaz6 \
	I1016 18:30:09.538687  262895 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c 
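	The --discovery-token-ca-cert-hash printed above is a SHA-256 of the cluster CA's public key. If the join output is lost, it can be recomputed on the control plane with the standard kubeadm recipe (CA path per the certificatesDir in the config above):

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	        | openssl rsa -pubin -outform der 2>/dev/null \
	        | openssl dgst -sha256 -hex | sed 's/^.* //'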
	I1016 18:30:09.540645  262895 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1016 18:30:09.540818  262895 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1016 18:30:09.540845  262895 cni.go:84] Creating CNI manager for ""
	I1016 18:30:09.540854  262895 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:30:09.543306  262895 out.go:179] * Configuring CNI (Container Networking Interface) ...
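
The kubeadm output above pairs a bootstrap token with a --discovery-token-ca-cert-hash. For reference, that hash is the SHA-256 of the cluster CA certificate's Subject Public Key Info, hex-encoded. A minimal Go sketch (not minikube code; the certificate path is assumed from the kubeadm defaults visible in this log) that recomputes it:

    // cahash.go — sketch only: recompute kubeadm's discovery-token-ca-cert-hash.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed kubeadm default path
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the Subject Public Key Info, not the whole certificate.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
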
	I1016 18:30:09.385807  265507 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1016 18:30:09.385827  265507 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1016 18:30:09.385905  265507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-063117
	I1016 18:30:09.418067  265507 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:30:09.418088  265507 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:30:09.418158  265507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-063117
	I1016 18:30:09.418325  265507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/embed-certs-063117/id_rsa Username:docker}
	I1016 18:30:09.419986  265507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/embed-certs-063117/id_rsa Username:docker}
	I1016 18:30:09.447671  265507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/embed-certs-063117/id_rsa Username:docker}
	I1016 18:30:09.527788  265507 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:30:09.545473  265507 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1016 18:30:09.545492  265507 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1016 18:30:09.548486  265507 node_ready.go:35] waiting up to 6m0s for node "embed-certs-063117" to be "Ready" ...
	I1016 18:30:09.549113  265507 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:30:09.571163  265507 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:30:09.572326  265507 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1016 18:30:09.572358  265507 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1016 18:30:09.600486  265507 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1016 18:30:09.600511  265507 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1016 18:30:09.630172  265507 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1016 18:30:09.630200  265507 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1016 18:30:09.658757  265507 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1016 18:30:09.658786  265507 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1016 18:30:09.685917  265507 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1016 18:30:09.685945  265507 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1016 18:30:09.704706  265507 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1016 18:30:09.704758  265507 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1016 18:30:09.721518  265507 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1016 18:30:09.721543  265507 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1016 18:30:09.740085  265507 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1016 18:30:09.740107  265507 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1016 18:30:09.755807  265507 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1016 18:30:11.187252  265507 node_ready.go:49] node "embed-certs-063117" is "Ready"
	I1016 18:30:11.187287  265507 node_ready.go:38] duration metric: took 1.638769467s for node "embed-certs-063117" to be "Ready" ...
	I1016 18:30:11.187304  265507 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:30:11.187374  265507 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:30:11.717670  265507 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.168520096s)
	I1016 18:30:11.717765  265507 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.146569378s)
	I1016 18:30:11.717887  265507 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.962041491s)
	I1016 18:30:11.717929  265507 api_server.go:72] duration metric: took 2.367998968s to wait for apiserver process to appear ...
	I1016 18:30:11.717955  265507 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:30:11.717974  265507 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1016 18:30:11.721872  265507 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-063117 addons enable metrics-server
	
	I1016 18:30:11.725170  265507 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:30:11.725200  265507 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:30:11.730333  265507 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1016 18:30:11.731612  265507 addons.go:514] duration metric: took 2.381596053s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
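
The 500s from /healthz above are expected this early: the only [-] entries are the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks, which finish moments after startup, so minikube keeps polling rather than failing. A minimal sketch of such a retry loop (endpoint taken from the log; TLS verification is skipped here for brevity, whereas minikube trusts the cluster CA):

    // healthzpoll.go — sketch only, not minikube's api_server.go: poll /healthz
    // until it returns 200, treating 500s from pending post-start hooks as retryable.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Sketch shortcut: skip verification; the real code trusts the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.103.2:8443/healthz") // endpoint from the log
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthy: %s\n", body)
    				return
    			}
    			fmt.Printf("got %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	panic("apiserver never reported healthy")
    }
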
	W1016 18:30:07.094070  254209 node_ready.go:57] node "default-k8s-diff-port-523257" has "Ready":"False" status (will retry)
	W1016 18:30:09.096724  254209 node_ready.go:57] node "default-k8s-diff-port-523257" has "Ready":"False" status (will retry)
	W1016 18:30:11.098614  254209 node_ready.go:57] node "default-k8s-diff-port-523257" has "Ready":"False" status (will retry)
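
The node_ready warnings above come from a fixed-interval retry (the timestamps step by roughly two seconds) against the Node's Ready condition. A hypothetical equivalent using kubectl's jsonpath support, with the node name copied from the log:

    // nodeready.go — sketch only: poll a node's Ready condition via kubectl.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	jsonpath := `-o=jsonpath={.status.conditions[?(@.type=="Ready")].status}`
    	for i := 0; i < 180; i++ { // ~6m budget, like the 6m0s wait logged for embed-certs-063117
    		out, err := exec.Command("kubectl", "get", "node",
    			"default-k8s-diff-port-523257", jsonpath).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "True" {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	panic("node never became Ready")
    }
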
	I1016 18:30:09.544998  262895 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 18:30:09.551484  262895 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 18:30:09.551504  262895 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 18:30:09.570592  262895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 18:30:09.890225  262895 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 18:30:09.890308  262895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:30:09.890346  262895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-794682 minikube.k8s.io/updated_at=2025_10_16T18_30_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=newest-cni-794682 minikube.k8s.io/primary=true
	I1016 18:30:09.904406  262895 ops.go:34] apiserver oom_adj: -16
	I1016 18:30:09.983291  262895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:30:10.483949  262895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:30:10.984141  262895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:30:11.483387  262895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:30:11.983377  262895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:30:12.484254  262895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:30:12.983871  262895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
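
The repeated `get sa default` runs above are the elevateKubeSystemPrivileges wait (named at 18:30:14.061 below): the cluster-admin binding created at 18:30:09.890 only becomes useful once the controller-manager has minted the default ServiceAccount, so minikube polls for it on a half-second cadence. A stripped-down sketch of that loop, with binary and kubeconfig paths copied from the log:

    // waitsa.go — sketch only: retry until the "default" ServiceAccount exists.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	for i := 0; i < 120; i++ {
    		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
    			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			fmt.Println("default service account is ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
    	}
    	panic("timed out waiting for the default service account")
    }
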
	I1016 18:30:09.555606  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:30:09.556030  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:30:09.556093  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:30:09.556145  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:30:09.604259  228782 cri.go:89] found id: "0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:09.604280  228782 cri.go:89] found id: ""
	I1016 18:30:09.604298  228782 logs.go:282] 1 containers: [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98]
	I1016 18:30:09.604353  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:09.609538  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:30:09.609608  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:30:09.655044  228782 cri.go:89] found id: ""
	I1016 18:30:09.655068  228782 logs.go:282] 0 containers: []
	W1016 18:30:09.655079  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:30:09.655086  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:30:09.655136  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:30:09.699146  228782 cri.go:89] found id: ""
	I1016 18:30:09.699180  228782 logs.go:282] 0 containers: []
	W1016 18:30:09.699191  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:30:09.699199  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:30:09.699258  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:30:09.740703  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:09.740748  228782 cri.go:89] found id: ""
	I1016 18:30:09.740759  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:30:09.740810  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:09.745855  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:30:09.745916  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:30:09.776803  228782 cri.go:89] found id: ""
	I1016 18:30:09.776832  228782 logs.go:282] 0 containers: []
	W1016 18:30:09.776844  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:30:09.776851  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:30:09.776909  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:30:09.820806  228782 cri.go:89] found id: "5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:09.820837  228782 cri.go:89] found id: "69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:30:09.820844  228782 cri.go:89] found id: ""
	I1016 18:30:09.820852  228782 logs.go:282] 2 containers: [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e]
	I1016 18:30:09.820907  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:09.828433  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:09.833338  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:30:09.833403  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:30:09.866898  228782 cri.go:89] found id: ""
	I1016 18:30:09.866925  228782 logs.go:282] 0 containers: []
	W1016 18:30:09.866940  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:30:09.866947  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:30:09.867037  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:30:09.906406  228782 cri.go:89] found id: ""
	I1016 18:30:09.906439  228782 logs.go:282] 0 containers: []
	W1016 18:30:09.906449  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:30:09.906468  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:30:09.906482  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:30:09.986418  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:30:09.986453  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:30:10.091171  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:30:10.091209  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:30:10.108372  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:30:10.108408  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:30:10.169486  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:30:10.169507  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:30:10.169523  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:10.248187  228782 logs.go:123] Gathering logs for kube-controller-manager [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a] ...
	I1016 18:30:10.248232  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:10.283519  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:30:10.283552  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:30:10.324178  228782 logs.go:123] Gathering logs for kube-apiserver [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98] ...
	I1016 18:30:10.324207  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:10.371421  228782 logs.go:123] Gathering logs for kube-controller-manager [69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e] ...
	I1016 18:30:10.371453  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 69e3900a11d38d3ed2db47c8c4e39375c7f573c21fc2544a25b4f6f1cfa81e8e"
	I1016 18:30:12.905782  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:30:12.906218  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:30:12.906274  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:30:12.906321  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:30:12.935209  228782 cri.go:89] found id: "0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:12.935246  228782 cri.go:89] found id: ""
	I1016 18:30:12.935254  228782 logs.go:282] 1 containers: [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98]
	I1016 18:30:12.935305  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:12.940146  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:30:12.940216  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:30:12.983618  228782 cri.go:89] found id: ""
	I1016 18:30:12.983649  228782 logs.go:282] 0 containers: []
	W1016 18:30:12.983660  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:30:12.983668  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:30:12.983754  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:30:13.020127  228782 cri.go:89] found id: ""
	I1016 18:30:13.020163  228782 logs.go:282] 0 containers: []
	W1016 18:30:13.020173  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:30:13.020180  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:30:13.020241  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:30:13.057937  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:13.057961  228782 cri.go:89] found id: ""
	I1016 18:30:13.057971  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:30:13.058039  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:13.063653  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:30:13.063746  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:30:13.096906  228782 cri.go:89] found id: ""
	I1016 18:30:13.096932  228782 logs.go:282] 0 containers: []
	W1016 18:30:13.096942  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:30:13.096950  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:30:13.097085  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:30:13.129613  228782 cri.go:89] found id: "5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:13.129639  228782 cri.go:89] found id: ""
	I1016 18:30:13.129649  228782 logs.go:282] 1 containers: [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a]
	I1016 18:30:13.129778  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:13.134412  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:30:13.134479  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:30:13.165702  228782 cri.go:89] found id: ""
	I1016 18:30:13.165748  228782 logs.go:282] 0 containers: []
	W1016 18:30:13.165759  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:30:13.165767  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:30:13.165822  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:30:13.196435  228782 cri.go:89] found id: ""
	I1016 18:30:13.196464  228782 logs.go:282] 0 containers: []
	W1016 18:30:13.196475  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:30:13.196485  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:30:13.196499  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:30:13.232029  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:30:13.232054  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:30:13.335980  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:30:13.336015  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:30:13.353540  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:30:13.353569  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:30:13.412811  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:30:13.412834  228782 logs.go:123] Gathering logs for kube-apiserver [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98] ...
	I1016 18:30:13.412846  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:13.447809  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:30:13.447837  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:13.504171  228782 logs.go:123] Gathering logs for kube-controller-manager [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a] ...
	I1016 18:30:13.504210  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:13.532391  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:30:13.532418  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
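
With the apiserver refusing connections, the run above falls back to a diagnostic sweep: enumerate containers per control-plane component through CRI, then tail whatever logs exist. A condensed sketch of that sweep using the same crictl invocations that appear in the log:

    // gatherlogs.go — sketch only: list container IDs by name via CRI, tail each.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
    		// Same flags as the log: all states, IDs only, filtered by name.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			continue
    		}
    		for _, id := range strings.Fields(string(out)) {
    			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("== %s [%s] ==\n%s\n", name, id, logs)
    		}
    	}
    }
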
	I1016 18:30:13.483970  262895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:30:13.983631  262895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:30:14.061036  262895 kubeadm.go:1113] duration metric: took 4.170791704s to wait for elevateKubeSystemPrivileges
	I1016 18:30:14.061075  262895 kubeadm.go:402] duration metric: took 15.212321904s to StartCluster
	I1016 18:30:14.061098  262895 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:14.061178  262895 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:30:14.063916  262895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:14.064203  262895 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 18:30:14.064198  262895 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:30:14.064229  262895 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:30:14.064455  262895 addons.go:69] Setting default-storageclass=true in profile "newest-cni-794682"
	I1016 18:30:14.064462  262895 config.go:182] Loaded profile config "newest-cni-794682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:30:14.064469  262895 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-794682"
	I1016 18:30:14.064485  262895 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-794682"
	I1016 18:30:14.064485  262895 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-794682"
	I1016 18:30:14.064528  262895 host.go:66] Checking if "newest-cni-794682" exists ...
	I1016 18:30:14.064885  262895 cli_runner.go:164] Run: docker container inspect newest-cni-794682 --format={{.State.Status}}
	I1016 18:30:14.065136  262895 cli_runner.go:164] Run: docker container inspect newest-cni-794682 --format={{.State.Status}}
	I1016 18:30:14.065648  262895 out.go:179] * Verifying Kubernetes components...
	I1016 18:30:14.066942  262895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:30:14.090765  262895 addons.go:238] Setting addon default-storageclass=true in "newest-cni-794682"
	I1016 18:30:14.090813  262895 host.go:66] Checking if "newest-cni-794682" exists ...
	I1016 18:30:14.091065  262895 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:30:14.091276  262895 cli_runner.go:164] Run: docker container inspect newest-cni-794682 --format={{.State.Status}}
	I1016 18:30:14.092306  262895 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:30:14.092324  262895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:30:14.092371  262895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:14.124110  262895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/newest-cni-794682/id_rsa Username:docker}
	I1016 18:30:14.126812  262895 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:30:14.126840  262895 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:30:14.126906  262895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:14.150999  262895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/newest-cni-794682/id_rsa Username:docker}
	I1016 18:30:14.177826  262895 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 18:30:14.222204  262895 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:30:14.249294  262895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:30:14.282795  262895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:30:14.383000  262895 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1016 18:30:14.384581  262895 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:30:14.384636  262895 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:30:14.624693  262895 api_server.go:72] duration metric: took 560.37436ms to wait for apiserver process to appear ...
	I1016 18:30:14.624775  262895 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:30:14.624796  262895 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:30:14.632020  262895 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1016 18:30:14.635108  262895 api_server.go:141] control plane version: v1.34.1
	I1016 18:30:14.635138  262895 api_server.go:131] duration metric: took 10.353536ms to wait for apiserver health ...
	I1016 18:30:14.635149  262895 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:30:14.636526  262895 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1016 18:30:14.638589  262895 addons.go:514] duration metric: took 574.361949ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1016 18:30:14.640083  262895 system_pods.go:59] 8 kube-system pods found
	I1016 18:30:14.640172  262895 system_pods.go:61] "coredns-66bc5c9577-7k82h" [127d26c2-1922-4ad8-b6cb-a86f9aefc431] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1016 18:30:14.640193  262895 system_pods.go:61] "etcd-newest-cni-794682" [3b93c2af-67b5-49b1-a0d8-0222ed51a01f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 18:30:14.640212  262895 system_pods.go:61] "kindnet-chqrm" [f697f30d-64fa-4695-ae47-0268f2604e30] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1016 18:30:14.640222  262895 system_pods.go:61] "kube-apiserver-newest-cni-794682" [e42f2077-4b39-4426-9f1b-67c3faec9f6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:30:14.640231  262895 system_pods.go:61] "kube-controller-manager-newest-cni-794682" [29288a90-424a-435b-9fe3-1c4e512c032e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:30:14.640249  262895 system_pods.go:61] "kube-proxy-dvbrk" [15fff10c-5233-4292-8a44-6005c5ad3ff1] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1016 18:30:14.640259  262895 system_pods.go:61] "kube-scheduler-newest-cni-794682" [4a6ae32c-791f-4592-bf85-5c9d9fba8c17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:30:14.640277  262895 system_pods.go:61] "storage-provisioner" [5d551025-22ed-4596-b776-7f087cb2cd62] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1016 18:30:14.640287  262895 system_pods.go:74] duration metric: took 5.130331ms to wait for pod list to return data ...
	I1016 18:30:14.640304  262895 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:30:14.643329  262895 default_sa.go:45] found service account: "default"
	I1016 18:30:14.643352  262895 default_sa.go:55] duration metric: took 3.041031ms for default service account to be created ...
	I1016 18:30:14.643364  262895 kubeadm.go:586] duration metric: took 579.048528ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1016 18:30:14.643381  262895 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:30:14.646407  262895 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1016 18:30:14.646434  262895 node_conditions.go:123] node cpu capacity is 8
	I1016 18:30:14.646446  262895 node_conditions.go:105] duration metric: took 3.060027ms to run NodePressure ...
	I1016 18:30:14.646459  262895 start.go:241] waiting for startup goroutines ...
	I1016 18:30:14.948505  262895 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-794682" context rescaled to 1 replicas
	I1016 18:30:14.948537  262895 start.go:246] waiting for cluster config update ...
	I1016 18:30:14.948551  262895 start.go:255] writing updated cluster config ...
	I1016 18:30:14.948864  262895 ssh_runner.go:195] Run: rm -f paused
	I1016 18:30:15.003537  262895 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1016 18:30:15.005536  262895 out.go:179] * Done! kubectl is now configured to use "newest-cni-794682" cluster and "default" namespace by default
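
The final start.go:624 line compares the client and cluster versions and reports the minor-version skew (zero here). A toy sketch of that comparison, with both values taken from the log line above:

    // skewcheck.go — sketch only: compute the kubectl/cluster minor-version skew.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    func minor(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	m, _ := strconv.Atoi(parts[1])
    	return m
    }

    func main() {
    	client, cluster := "1.34.1", "1.34.1" // values from the log line above
    	skew := minor(client) - minor(cluster)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
    }
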
	
	
	==> CRI-O <==
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.480586823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.484416428Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c8bc2b69-fd09-4604-a36a-70b7e06526f3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.485650255Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=52814105-35ab-4738-81a1-e6523443f00e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.486971488Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.487517923Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.487872718Z" level=info msg="Ran pod sandbox 0182067655e2e528e270223e85da8869994b0006048901466f74fbd9ada67440 with infra container: kube-system/kindnet-chqrm/POD" id=c8bc2b69-fd09-4604-a36a-70b7e06526f3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.488369018Z" level=info msg="Ran pod sandbox fbbfe47e2fa0571730b2914aa97b8548042f47a4c04c173a4190038ea9644422 with infra container: kube-system/kube-proxy-dvbrk/POD" id=52814105-35ab-4738-81a1-e6523443f00e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.489248478Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e8bc08e5-0679-4f80-bdbd-758f90ac1e6c name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.489387758Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=1d836fb8-ed46-4755-a340-192e80179674 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.490234331Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=57c10999-32f5-488d-9333-5f8f2301c1e0 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.490316617Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=86fd3030-0344-4de4-a0d8-98c7cabea572 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.500007939Z" level=info msg="Creating container: kube-system/kindnet-chqrm/kindnet-cni" id=aff7e79f-9ece-4648-9fa6-ec94cc112635 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.501309129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.504133166Z" level=info msg="Creating container: kube-system/kube-proxy-dvbrk/kube-proxy" id=99abc527-cbcc-4247-82ed-a6a650e23c8d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.509671725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.512974917Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.514554304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.515699672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.517125798Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.547559609Z" level=info msg="Created container a7dc0cd391f239ee60355a0265ad454f0cb03ae1868337895c775445d1658229: kube-system/kindnet-chqrm/kindnet-cni" id=aff7e79f-9ece-4648-9fa6-ec94cc112635 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.548880356Z" level=info msg="Starting container: a7dc0cd391f239ee60355a0265ad454f0cb03ae1868337895c775445d1658229" id=aad8c724-ab0e-4662-8f7a-4132c640aa3a name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.551755242Z" level=info msg="Started container" PID=1615 containerID=a7dc0cd391f239ee60355a0265ad454f0cb03ae1868337895c775445d1658229 description=kube-system/kindnet-chqrm/kindnet-cni id=aad8c724-ab0e-4662-8f7a-4132c640aa3a name=/runtime.v1.RuntimeService/StartContainer sandboxID=0182067655e2e528e270223e85da8869994b0006048901466f74fbd9ada67440
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.553292189Z" level=info msg="Created container 1fc1986e26a95aff74e409d39bb803ad3ebc6348f5549ba269e987bdc36b7df0: kube-system/kube-proxy-dvbrk/kube-proxy" id=99abc527-cbcc-4247-82ed-a6a650e23c8d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.554277202Z" level=info msg="Starting container: 1fc1986e26a95aff74e409d39bb803ad3ebc6348f5549ba269e987bdc36b7df0" id=40db03a8-3498-4a87-8f7a-663bb97fa0c0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:30:14 newest-cni-794682 crio[782]: time="2025-10-16T18:30:14.558960734Z" level=info msg="Started container" PID=1616 containerID=1fc1986e26a95aff74e409d39bb803ad3ebc6348f5549ba269e987bdc36b7df0 description=kube-system/kube-proxy-dvbrk/kube-proxy id=40db03a8-3498-4a87-8f7a-663bb97fa0c0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fbbfe47e2fa0571730b2914aa97b8548042f47a4c04c173a4190038ea9644422
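
The two "Skipping invalid sysctl" warnings at the top of this section are CRI-O declining to apply a namespaced net.* sysctl to sandboxes that share the host network namespace; the request IDs match the kindnet-chqrm and kube-proxy-dvbrk sandboxes, both of which run with hostNetwork. A toy sketch of that guard (not CRI-O's actual code):

    // sysctlguard.go — sketch only: net.* sysctls are per-netns, so they cannot
    // be set for a pod sandbox that uses the host network namespace.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	hostNetwork := true // as for the kube-proxy and kindnet sandboxes above
    	sysctls := map[string]string{"net.ipv4.ip_unprivileged_port_start": "0"}
    	for k, v := range sysctls {
    		if hostNetwork && strings.HasPrefix(k, "net.") {
    			fmt.Printf("skipping {%s %s}: not allowed with host net enabled\n", k, v)
    			continue
    		}
    		fmt.Printf("applying %s=%s\n", k, v)
    	}
    }
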
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1fc1986e26a95       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   fbbfe47e2fa05       kube-proxy-dvbrk                            kube-system
	a7dc0cd391f23       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   0182067655e2e       kindnet-chqrm                               kube-system
	eda281bbd7ef1       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   c0f69d473f3be       kube-controller-manager-newest-cni-794682   kube-system
	5b39cd29417bd       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   35f07365c47f0       kube-apiserver-newest-cni-794682            kube-system
	d68f2d3c91698       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   66539f3b7a269       etcd-newest-cni-794682                      kube-system
	3ea099a86f3bb       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   64361b7e43e20       kube-scheduler-newest-cni-794682            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-794682
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-794682
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=newest-cni-794682
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_30_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:30:06 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-794682
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:30:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:30:08 +0000   Thu, 16 Oct 2025 18:30:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:30:08 +0000   Thu, 16 Oct 2025 18:30:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:30:08 +0000   Thu, 16 Oct 2025 18:30:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 16 Oct 2025 18:30:08 +0000   Thu, 16 Oct 2025 18:30:04 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-794682
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                815246ac-cdb2-4d78-ba36-a1b7df678ead
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-794682                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-chqrm                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-794682             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-794682    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-dvbrk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-794682             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 1s    kube-proxy       
	  Normal  Starting                 8s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s    kubelet          Node newest-cni-794682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s    kubelet          Node newest-cni-794682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s    kubelet          Node newest-cni-794682 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-794682 event: Registered Node newest-cni-794682 in Controller
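
This dump is internally consistent: the node still carries the node.kubernetes.io/not-ready:NoSchedule taint because no CNI config is on disk yet (the Ready=False condition above), yet kindnet and kube-proxy are already running while coredns and storage-provisioner sat Pending in the earlier pod list. The difference is tolerations; a toy sketch of the matching rule, simplified from Kubernetes' operator=Exists semantics:

    // tolerations.go — sketch only: why DaemonSet pods schedule onto a
    // not-ready-tainted node while ordinary pods wait.
    package main

    import "fmt"

    type taint struct{ key, effect string }

    func tolerates(tolerations []taint, t taint) bool {
    	for _, tol := range tolerations {
    		// An empty key tolerates any taint key; an empty effect, any effect
    		// (roughly kubernetes' operator=Exists matching).
    		if (tol.key == "" || tol.key == t.key) && (tol.effect == "" || tol.effect == t.effect) {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	notReady := taint{"node.kubernetes.io/not-ready", "NoSchedule"}
    	daemonset := []taint{{"", "NoSchedule"}} // CNI/proxy daemonsets tolerate broadly
    	coredns := []taint{}                     // no matching toleration until the node is Ready
    	fmt.Println("kindnet schedulable:", tolerates(daemonset, notReady))
    	fmt.Println("coredns schedulable:", tolerates(coredns, notReady))
    }
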
	
	
	==> dmesg <==
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.008703] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.022984] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023933] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +2.047848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +4.031714] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +8.063410] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[ +16.382910] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[Oct16 17:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
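
The "martian source" entries are the kernel flagging packets whose source address (127.0.0.1) cannot legitimately arrive on eth0; they are logged when the rp_filter/log_martians sysctls are enabled. Note the timestamps: Oct16 17:46-17:47 is well before this cluster's 18:30 startup, so these lines are residue from earlier tests on the shared CI host rather than a symptom of anything in newest-cni-794682.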
	
	
	==> etcd [d68f2d3c91698e9d0a873e98e061fce68abde2644d50460de701a486f7518c79] <==
	{"level":"warn","ts":"2025-10-16T18:30:05.390916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.399496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.408486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.415000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.423213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.429901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.438322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.445223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.453364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.460332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.468101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.478679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.486932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.494232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.501481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.509075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.517820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.525751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.532508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.540169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.546879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.558676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.565942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.574132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:05.627791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34200","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:30:16 up  1:12,  0 user,  load average: 2.88, 2.63, 1.76
	Linux newest-cni-794682 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a7dc0cd391f239ee60355a0265ad454f0cb03ae1868337895c775445d1658229] <==
	I1016 18:30:14.747835       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 18:30:14.748118       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1016 18:30:14.748328       1 main.go:148] setting mtu 1500 for CNI 
	I1016 18:30:14.748352       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 18:30:14.748375       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T18:30:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 18:30:15.043830       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 18:30:15.044010       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 18:30:15.044041       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 18:30:15.044237       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 18:30:15.444319       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 18:30:15.444359       1 metrics.go:72] Registering metrics
	I1016 18:30:15.444451       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [5b39cd29417bda9d14ea3998cfd9507af91dcc492e729f2f63f26ec83de88df5] <==
	I1016 18:30:06.127114       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1016 18:30:06.127124       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1016 18:30:06.128366       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 18:30:06.132797       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:30:06.133807       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1016 18:30:06.139010       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:30:06.139234       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1016 18:30:06.154899       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:30:07.032020       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1016 18:30:07.037180       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1016 18:30:07.037195       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 18:30:07.611595       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 18:30:07.655366       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 18:30:07.737848       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1016 18:30:07.746561       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1016 18:30:07.747703       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 18:30:07.752852       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 18:30:08.046686       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 18:30:08.941586       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 18:30:08.952329       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1016 18:30:08.960888       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1016 18:30:13.951460       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:30:13.956565       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:30:14.102362       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 18:30:14.149598       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [eda281bbd7ef13baf9d7e83622c95297dffc929a7933958bd15930950b2f7e08] <==
	I1016 18:30:13.046750       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1016 18:30:13.046838       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1016 18:30:13.046973       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1016 18:30:13.047056       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-794682"
	I1016 18:30:13.047141       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1016 18:30:13.047173       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1016 18:30:13.049682       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 18:30:13.050019       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1016 18:30:13.050055       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1016 18:30:13.050081       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1016 18:30:13.050117       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1016 18:30:13.052150       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:30:13.052179       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 18:30:13.054103       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:30:13.054184       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1016 18:30:13.057456       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1016 18:30:13.057523       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 18:30:13.058216       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1016 18:30:13.058273       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1016 18:30:13.058311       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1016 18:30:13.058319       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1016 18:30:13.058326       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1016 18:30:13.067112       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1016 18:30:13.067115       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-794682" podCIDRs=["10.42.0.0/24"]
	I1016 18:30:13.073610       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [1fc1986e26a95aff74e409d39bb803ad3ebc6348f5549ba269e987bdc36b7df0] <==
	I1016 18:30:14.616566       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:30:14.685649       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:30:14.786149       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:30:14.786190       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1016 18:30:14.786316       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:30:14.806223       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:30:14.806279       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:30:14.812223       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:30:14.813690       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:30:14.814040       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:30:14.815271       1 config.go:200] "Starting service config controller"
	I1016 18:30:14.815299       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:30:14.815314       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:30:14.815295       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:30:14.815342       1 config.go:309] "Starting node config controller"
	I1016 18:30:14.815349       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:30:14.815349       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:30:14.815334       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:30:14.915911       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:30:14.915945       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 18:30:14.915945       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1016 18:30:14.915947       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [3ea099a86f3bbb9f1884126812d7fc7dd34f867e14ab952d5a86ce8bc79198a2] <==
	E1016 18:30:06.081768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 18:30:06.082547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 18:30:06.082726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 18:30:06.082876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 18:30:06.082887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 18:30:06.082993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 18:30:06.083081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 18:30:06.083110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:30:06.083086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 18:30:06.083405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 18:30:06.083433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 18:30:06.900045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 18:30:06.901276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 18:30:06.920701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 18:30:06.951329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 18:30:06.984739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 18:30:06.992990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 18:30:07.227778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:30:07.242036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:30:07.277386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 18:30:07.306045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 18:30:07.331351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 18:30:07.341079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 18:30:07.440310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1016 18:30:09.580223       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 18:30:09 newest-cni-794682 kubelet[1336]: I1016 18:30:09.788452    1336 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 16 18:30:09 newest-cni-794682 kubelet[1336]: I1016 18:30:09.820915    1336 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-794682"
	Oct 16 18:30:09 newest-cni-794682 kubelet[1336]: I1016 18:30:09.821382    1336 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-794682"
	Oct 16 18:30:09 newest-cni-794682 kubelet[1336]: I1016 18:30:09.821601    1336 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-794682"
	Oct 16 18:30:09 newest-cni-794682 kubelet[1336]: I1016 18:30:09.821749    1336 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-794682"
	Oct 16 18:30:09 newest-cni-794682 kubelet[1336]: E1016 18:30:09.836704    1336 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-794682\" already exists" pod="kube-system/kube-scheduler-newest-cni-794682"
	Oct 16 18:30:09 newest-cni-794682 kubelet[1336]: E1016 18:30:09.837067    1336 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-794682\" already exists" pod="kube-system/kube-controller-manager-newest-cni-794682"
	Oct 16 18:30:09 newest-cni-794682 kubelet[1336]: E1016 18:30:09.837328    1336 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-794682\" already exists" pod="kube-system/etcd-newest-cni-794682"
	Oct 16 18:30:09 newest-cni-794682 kubelet[1336]: E1016 18:30:09.837494    1336 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-794682\" already exists" pod="kube-system/kube-apiserver-newest-cni-794682"
	Oct 16 18:30:09 newest-cni-794682 kubelet[1336]: I1016 18:30:09.852374    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-794682" podStartSLOduration=1.852354787 podStartE2EDuration="1.852354787s" podCreationTimestamp="2025-10-16 18:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:30:09.851923866 +0000 UTC m=+1.140150155" watchObservedRunningTime="2025-10-16 18:30:09.852354787 +0000 UTC m=+1.140581056"
	Oct 16 18:30:09 newest-cni-794682 kubelet[1336]: I1016 18:30:09.877179    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-794682" podStartSLOduration=1.877155505 podStartE2EDuration="1.877155505s" podCreationTimestamp="2025-10-16 18:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:30:09.862089319 +0000 UTC m=+1.150315608" watchObservedRunningTime="2025-10-16 18:30:09.877155505 +0000 UTC m=+1.165381794"
	Oct 16 18:30:09 newest-cni-794682 kubelet[1336]: I1016 18:30:09.877421    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-794682" podStartSLOduration=1.87741095 podStartE2EDuration="1.87741095s" podCreationTimestamp="2025-10-16 18:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:30:09.876921607 +0000 UTC m=+1.165147895" watchObservedRunningTime="2025-10-16 18:30:09.87741095 +0000 UTC m=+1.165637238"
	Oct 16 18:30:09 newest-cni-794682 kubelet[1336]: I1016 18:30:09.904002    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-794682" podStartSLOduration=1.903915818 podStartE2EDuration="1.903915818s" podCreationTimestamp="2025-10-16 18:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:30:09.889008202 +0000 UTC m=+1.177234487" watchObservedRunningTime="2025-10-16 18:30:09.903915818 +0000 UTC m=+1.192142137"
	Oct 16 18:30:13 newest-cni-794682 kubelet[1336]: I1016 18:30:13.144256    1336 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 16 18:30:13 newest-cni-794682 kubelet[1336]: I1016 18:30:13.145114    1336 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 16 18:30:14 newest-cni-794682 kubelet[1336]: I1016 18:30:14.220282    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15fff10c-5233-4292-8a44-6005c5ad3ff1-xtables-lock\") pod \"kube-proxy-dvbrk\" (UID: \"15fff10c-5233-4292-8a44-6005c5ad3ff1\") " pod="kube-system/kube-proxy-dvbrk"
	Oct 16 18:30:14 newest-cni-794682 kubelet[1336]: I1016 18:30:14.220344    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2fld\" (UniqueName: \"kubernetes.io/projected/f697f30d-64fa-4695-ae47-0268f2604e30-kube-api-access-z2fld\") pod \"kindnet-chqrm\" (UID: \"f697f30d-64fa-4695-ae47-0268f2604e30\") " pod="kube-system/kindnet-chqrm"
	Oct 16 18:30:14 newest-cni-794682 kubelet[1336]: I1016 18:30:14.220376    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f697f30d-64fa-4695-ae47-0268f2604e30-cni-cfg\") pod \"kindnet-chqrm\" (UID: \"f697f30d-64fa-4695-ae47-0268f2604e30\") " pod="kube-system/kindnet-chqrm"
	Oct 16 18:30:14 newest-cni-794682 kubelet[1336]: I1016 18:30:14.220398    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f697f30d-64fa-4695-ae47-0268f2604e30-xtables-lock\") pod \"kindnet-chqrm\" (UID: \"f697f30d-64fa-4695-ae47-0268f2604e30\") " pod="kube-system/kindnet-chqrm"
	Oct 16 18:30:14 newest-cni-794682 kubelet[1336]: I1016 18:30:14.220417    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js5s2\" (UniqueName: \"kubernetes.io/projected/15fff10c-5233-4292-8a44-6005c5ad3ff1-kube-api-access-js5s2\") pod \"kube-proxy-dvbrk\" (UID: \"15fff10c-5233-4292-8a44-6005c5ad3ff1\") " pod="kube-system/kube-proxy-dvbrk"
	Oct 16 18:30:14 newest-cni-794682 kubelet[1336]: I1016 18:30:14.220444    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f697f30d-64fa-4695-ae47-0268f2604e30-lib-modules\") pod \"kindnet-chqrm\" (UID: \"f697f30d-64fa-4695-ae47-0268f2604e30\") " pod="kube-system/kindnet-chqrm"
	Oct 16 18:30:14 newest-cni-794682 kubelet[1336]: I1016 18:30:14.220500    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15fff10c-5233-4292-8a44-6005c5ad3ff1-lib-modules\") pod \"kube-proxy-dvbrk\" (UID: \"15fff10c-5233-4292-8a44-6005c5ad3ff1\") " pod="kube-system/kube-proxy-dvbrk"
	Oct 16 18:30:14 newest-cni-794682 kubelet[1336]: I1016 18:30:14.220566    1336 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/15fff10c-5233-4292-8a44-6005c5ad3ff1-kube-proxy\") pod \"kube-proxy-dvbrk\" (UID: \"15fff10c-5233-4292-8a44-6005c5ad3ff1\") " pod="kube-system/kube-proxy-dvbrk"
	Oct 16 18:30:14 newest-cni-794682 kubelet[1336]: I1016 18:30:14.950351    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-chqrm" podStartSLOduration=0.949265684 podStartE2EDuration="949.265684ms" podCreationTimestamp="2025-10-16 18:30:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:30:14.949140152 +0000 UTC m=+6.237366431" watchObservedRunningTime="2025-10-16 18:30:14.949265684 +0000 UTC m=+6.237491972"
	Oct 16 18:30:14 newest-cni-794682 kubelet[1336]: I1016 18:30:14.951204    1336 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dvbrk" podStartSLOduration=0.951176422 podStartE2EDuration="951.176422ms" podCreationTimestamp="2025-10-16 18:30:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:30:14.844703685 +0000 UTC m=+6.132929972" watchObservedRunningTime="2025-10-16 18:30:14.951176422 +0000 UTC m=+6.239402710"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-794682 -n newest-cni-794682
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-794682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-7k82h storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-794682 describe pod coredns-66bc5c9577-7k82h storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-794682 describe pod coredns-66bc5c9577-7k82h storage-provisioner: exit status 1 (79.411404ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-7k82h" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-794682 describe pod coredns-66bc5c9577-7k82h storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-523257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-523257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (245.532015ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:30:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-523257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-523257 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-523257 describe deploy/metrics-server -n kube-system: exit status 1 (61.806936ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-523257 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-523257
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-523257:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0",
	        "Created": "2025-10-16T18:29:11.800479319Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 255071,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:29:11.834464638Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0/hostname",
	        "HostsPath": "/var/lib/docker/containers/b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0/hosts",
	        "LogPath": "/var/lib/docker/containers/b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0/b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0-json.log",
	        "Name": "/default-k8s-diff-port-523257",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-523257:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-523257",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0",
	                "LowerDir": "/var/lib/docker/overlay2/3c55bed1f62478cc2c96719d866ecf1124db59b51bd2a9657261f8e58e8a903e-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3c55bed1f62478cc2c96719d866ecf1124db59b51bd2a9657261f8e58e8a903e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3c55bed1f62478cc2c96719d866ecf1124db59b51bd2a9657261f8e58e8a903e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3c55bed1f62478cc2c96719d866ecf1124db59b51bd2a9657261f8e58e8a903e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-523257",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-523257/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-523257",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-523257",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-523257",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "97186beacdf936d358bb2b9abd7f6e44ad39a4d523b3c502df448a8e3fe67c3c",
	            "SandboxKey": "/var/run/docker/netns/97186beacdf9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-523257": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:f3:85:31:94:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "18ba3d11487252e3067f2e3b5f472d435c8e0f7e30303d875809bd325d5e3e3d",
	                    "EndpointID": "29f1bec42fddfacc81a92fe5dacd45abb9fe087777e9516305ab83500507c0a9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-523257",
	                        "b0bbc4eeeb33"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-523257 -n default-k8s-diff-port-523257
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-523257 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-523257 logs -n 25: (1.084550973s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-808539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ start   │ -p no-preload-808539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:29 UTC │
	│ image   │ old-k8s-version-956814 image list --format=json                                                                                                                                                                                               │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ pause   │ -p old-k8s-version-956814 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │                     │
	│ delete  │ -p old-k8s-version-956814                                                                                                                                                                                                                     │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ delete  │ -p old-k8s-version-956814                                                                                                                                                                                                                     │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ start   │ -p embed-certs-063117 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p cert-expiration-489554 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-489554       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p cert-expiration-489554                                                                                                                                                                                                                     │ cert-expiration-489554       │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p disable-driver-mounts-246527                                                                                                                                                                                                               │ disable-driver-mounts-246527 │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p default-k8s-diff-port-523257 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:30 UTC │
	│ image   │ no-preload-808539 image list --format=json                                                                                                                                                                                                    │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ pause   │ -p no-preload-808539 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-063117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ stop    │ -p embed-certs-063117 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:30 UTC │
	│ delete  │ -p no-preload-808539                                                                                                                                                                                                                          │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p no-preload-808539                                                                                                                                                                                                                          │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p newest-cni-794682 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:30 UTC │
	│ addons  │ enable dashboard -p embed-certs-063117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ start   │ -p embed-certs-063117 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-794682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ stop    │ -p newest-cni-794682 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ addons  │ enable dashboard -p newest-cni-794682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ start   │ -p newest-cni-794682 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-523257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:30:21
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:30:21.152584  270736 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:30:21.152931  270736 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:30:21.152943  270736 out.go:374] Setting ErrFile to fd 2...
	I1016 18:30:21.152949  270736 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:30:21.153283  270736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:30:21.153985  270736 out.go:368] Setting JSON to false
	I1016 18:30:21.155547  270736 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4369,"bootTime":1760635052,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:30:21.155661  270736 start.go:141] virtualization: kvm guest
	I1016 18:30:21.159830  270736 out.go:179] * [newest-cni-794682] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:30:21.164009  270736 notify.go:220] Checking for updates...
	I1016 18:30:21.164046  270736 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:30:21.166051  270736 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:30:21.167545  270736 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:30:21.168937  270736 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 18:30:21.170373  270736 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:30:21.172157  270736 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:30:21.174450  270736 config.go:182] Loaded profile config "newest-cni-794682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:30:21.175151  270736 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:30:21.206447  270736 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 18:30:21.206560  270736 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:30:21.275675  270736 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-16 18:30:21.263342732 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:30:21.275800  270736 docker.go:318] overlay module found
	I1016 18:30:21.280927  270736 out.go:179] * Using the docker driver based on existing profile
	I1016 18:30:21.282522  270736 start.go:305] selected driver: docker
	I1016 18:30:21.282543  270736 start.go:925] validating driver "docker" against &{Name:newest-cni-794682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-794682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:30:21.282643  270736 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:30:21.283370  270736 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:30:21.345288  270736 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-16 18:30:21.334490708 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:30:21.345566  270736 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1016 18:30:21.345589  270736 cni.go:84] Creating CNI manager for ""
	I1016 18:30:21.345634  270736 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:30:21.345666  270736 start.go:349] cluster config:
	{Name:newest-cni-794682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-794682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:30:21.347912  270736 out.go:179] * Starting "newest-cni-794682" primary control-plane node in "newest-cni-794682" cluster
	I1016 18:30:21.349743  270736 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:30:21.351215  270736 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:30:21.352482  270736 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:30:21.352534  270736 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1016 18:30:21.352557  270736 cache.go:58] Caching tarball of preloaded images
	I1016 18:30:21.352605  270736 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:30:21.352638  270736 preload.go:233] Found /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1016 18:30:21.352646  270736 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:30:21.352759  270736 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682/config.json ...
	I1016 18:30:21.377574  270736 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:30:21.377594  270736 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:30:21.377609  270736 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:30:21.377632  270736 start.go:360] acquireMachinesLock for newest-cni-794682: {Name:mkc6c572380046cef9b391cb88c87708b2d5d19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:30:21.377760  270736 start.go:364] duration metric: took 81.158µs to acquireMachinesLock for "newest-cni-794682"
	I1016 18:30:21.377786  270736 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:30:21.377793  270736 fix.go:54] fixHost starting: 
	I1016 18:30:21.378064  270736 cli_runner.go:164] Run: docker container inspect newest-cni-794682 --format={{.State.Status}}
	I1016 18:30:21.399419  270736 fix.go:112] recreateIfNeeded on newest-cni-794682: state=Stopped err=<nil>
	W1016 18:30:21.399455  270736 fix.go:138] unexpected machine state, will restart: <nil>
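The fixHost step above decides whether to restart the existing machine by probing the profile container's state with `docker container inspect --format={{.State.Status}}` (the cli_runner line at 18:30:21.378064). A minimal Go sketch of that probe, illustrative only and not minikube's actual cli_runner code; the profile name is hard-coded here for the example:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState runs the same probe as the cli_runner log line above.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("newest-cni-794682")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// Docker reports lowercase states such as "exited" or "running";
		// a stopped state is what triggers the "will restart" path above.
		fmt.Println("state:", state)
	}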
	W1016 18:30:19.263386  265507 pod_ready.go:104] pod "coredns-66bc5c9577-v85b5" is not "Ready", error: <nil>
	W1016 18:30:21.264000  265507 pod_ready.go:104] pod "coredns-66bc5c9577-v85b5" is not "Ready", error: <nil>
	I1016 18:30:19.444319  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:30:19.444740  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:30:19.444797  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:30:19.444857  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:30:19.484474  228782 cri.go:89] found id: "0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:19.484498  228782 cri.go:89] found id: ""
	I1016 18:30:19.484508  228782 logs.go:282] 1 containers: [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98]
	I1016 18:30:19.484567  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:19.489338  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:30:19.489416  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:30:19.526406  228782 cri.go:89] found id: ""
	I1016 18:30:19.526494  228782 logs.go:282] 0 containers: []
	W1016 18:30:19.526510  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:30:19.526518  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:30:19.526576  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:30:19.565282  228782 cri.go:89] found id: ""
	I1016 18:30:19.565310  228782 logs.go:282] 0 containers: []
	W1016 18:30:19.565321  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:30:19.565329  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:30:19.565389  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:30:19.602445  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:19.602470  228782 cri.go:89] found id: ""
	I1016 18:30:19.602479  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:30:19.602535  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:19.607731  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:30:19.607800  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:30:19.642926  228782 cri.go:89] found id: ""
	I1016 18:30:19.642964  228782 logs.go:282] 0 containers: []
	W1016 18:30:19.642975  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:30:19.642982  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:30:19.643045  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:30:19.677132  228782 cri.go:89] found id: "5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:19.677156  228782 cri.go:89] found id: ""
	I1016 18:30:19.677165  228782 logs.go:282] 1 containers: [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a]
	I1016 18:30:19.677224  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:19.682584  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:30:19.682648  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:30:19.712396  228782 cri.go:89] found id: ""
	I1016 18:30:19.712423  228782 logs.go:282] 0 containers: []
	W1016 18:30:19.712434  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:30:19.712442  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:30:19.712492  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:30:19.741255  228782 cri.go:89] found id: ""
	I1016 18:30:19.741281  228782 logs.go:282] 0 containers: []
	W1016 18:30:19.741292  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:30:19.741302  228782 logs.go:123] Gathering logs for kube-controller-manager [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a] ...
	I1016 18:30:19.741317  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:19.776253  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:30:19.776280  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:30:19.837597  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:30:19.837630  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:30:19.871309  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:30:19.871335  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:30:19.963099  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:30:19.963135  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:30:19.982694  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:30:19.982755  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:30:20.068957  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:30:20.068989  228782 logs.go:123] Gathering logs for kube-apiserver [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98] ...
	I1016 18:30:20.069004  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:20.117950  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:30:20.117985  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
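Each retry cycle above opens with a healthz probe against https://192.168.76.2:8443/healthz that fails with "connection refused" while the apiserver is down, after which the harness falls back to gathering component logs. A minimal sketch of such a probe, assuming certificate verification is skipped for a quick liveness-style check; minikube's api_server.go may configure TLS differently:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// probeHealthz performs a single GET against the apiserver healthz
	// endpoint; a "connection refused" error matches the log lines above.
	func probeHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption: skip cert verification for the probe.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %s", resp.Status)
		}
		return nil
	}

	func main() {
		if err := probeHealthz("https://192.168.76.2:8443/healthz"); err != nil {
			fmt.Println("apiserver not healthy yet:", err)
		}
	}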
	I1016 18:30:22.697499  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:30:22.697921  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:30:22.697978  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:30:22.698033  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:30:22.727992  228782 cri.go:89] found id: "0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:22.728011  228782 cri.go:89] found id: ""
	I1016 18:30:22.728019  228782 logs.go:282] 1 containers: [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98]
	I1016 18:30:22.728087  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:22.732224  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:30:22.732294  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:30:22.761154  228782 cri.go:89] found id: ""
	I1016 18:30:22.761181  228782 logs.go:282] 0 containers: []
	W1016 18:30:22.761191  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:30:22.761199  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:30:22.761277  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:30:22.790807  228782 cri.go:89] found id: ""
	I1016 18:30:22.790834  228782 logs.go:282] 0 containers: []
	W1016 18:30:22.790844  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:30:22.790852  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:30:22.790910  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:30:22.818463  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:22.818482  228782 cri.go:89] found id: ""
	I1016 18:30:22.818489  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:30:22.818543  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:22.822771  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:30:22.822842  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:30:22.850980  228782 cri.go:89] found id: ""
	I1016 18:30:22.851008  228782 logs.go:282] 0 containers: []
	W1016 18:30:22.851016  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:30:22.851025  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:30:22.851081  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:30:22.879807  228782 cri.go:89] found id: "5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:22.879829  228782 cri.go:89] found id: ""
	I1016 18:30:22.879837  228782 logs.go:282] 1 containers: [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a]
	I1016 18:30:22.879891  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:22.884035  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:30:22.884101  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:30:22.910938  228782 cri.go:89] found id: ""
	I1016 18:30:22.910962  228782 logs.go:282] 0 containers: []
	W1016 18:30:22.910971  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:30:22.910978  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:30:22.911037  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:30:22.938572  228782 cri.go:89] found id: ""
	I1016 18:30:22.938600  228782 logs.go:282] 0 containers: []
	W1016 18:30:22.938610  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:30:22.938621  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:30:22.938637  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:30:23.029415  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:30:23.029447  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:30:23.044905  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:30:23.044932  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:30:23.102676  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:30:23.102698  228782 logs.go:123] Gathering logs for kube-apiserver [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98] ...
	I1016 18:30:23.102710  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:23.135474  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:30:23.135510  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:23.188617  228782 logs.go:123] Gathering logs for kube-controller-manager [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a] ...
	I1016 18:30:23.188651  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:23.217050  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:30:23.217083  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:30:23.274650  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:30:23.274680  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
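The "container status" step above relies on a shell fallback: use crictl if it is on PATH, otherwise fall back to docker ps. A sketch of issuing that same compound command from Go, assuming it runs directly on the node (the real harness sends it through ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same fallback chain as the log: prefer crictl, else docker.
		cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("container status failed:", err)
		}
		fmt.Print(string(out))
	}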
	
	
	==> CRI-O <==
	Oct 16 18:30:14 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:14.623497855Z" level=info msg="Starting container: b9795e9d81f9673d6541527fa87364cf99a4c0bc78576a2a2aa0a9ec25fff9de" id=3e604ac1-8278-4243-b2de-d5672bbb8710 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:30:14 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:14.626181264Z" level=info msg="Started container" PID=1860 containerID=b9795e9d81f9673d6541527fa87364cf99a4c0bc78576a2a2aa0a9ec25fff9de description=kube-system/coredns-66bc5c9577-jx8q2/coredns id=3e604ac1-8278-4243-b2de-d5672bbb8710 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a8ce8181d8ef635000c59b19a801f133965a9cdd13b7a7872020132ef681b0c0
	Oct 16 18:30:17 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:17.239333586Z" level=info msg="Running pod sandbox: default/busybox/POD" id=520f48bc-0ac5-46da-9098-a86191d59df6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:30:17 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:17.239457699Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:17 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:17.245664279Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:acc5a294c866982462c64fc3ef8ca9e685b2843cdaef882da26e947411358393 UID:e96222c9-4604-49ac-a0f7-6328bfe2f82a NetNS:/var/run/netns/8ded5fba-a383-4bb7-b090-0aa25a28c361 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001567c8}] Aliases:map[]}"
	Oct 16 18:30:17 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:17.245887153Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 16 18:30:17 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:17.260072787Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:acc5a294c866982462c64fc3ef8ca9e685b2843cdaef882da26e947411358393 UID:e96222c9-4604-49ac-a0f7-6328bfe2f82a NetNS:/var/run/netns/8ded5fba-a383-4bb7-b090-0aa25a28c361 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001567c8}] Aliases:map[]}"
	Oct 16 18:30:17 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:17.260421016Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 16 18:30:17 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:17.261437915Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 16 18:30:17 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:17.263697519Z" level=info msg="Ran pod sandbox acc5a294c866982462c64fc3ef8ca9e685b2843cdaef882da26e947411358393 with infra container: default/busybox/POD" id=520f48bc-0ac5-46da-9098-a86191d59df6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:30:17 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:17.265426357Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=940ed269-f677-4458-aa98-765c981e9fae name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:17 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:17.265680321Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=940ed269-f677-4458-aa98-765c981e9fae name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:17 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:17.265881814Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=940ed269-f677-4458-aa98-765c981e9fae name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:17 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:17.267791004Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d5c68da8-7a4e-4818-a909-2af132416142 name=/runtime.v1.ImageService/PullImage
	Oct 16 18:30:17 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:17.270137437Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 16 18:30:18 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:18.669259613Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=d5c68da8-7a4e-4818-a909-2af132416142 name=/runtime.v1.ImageService/PullImage
	Oct 16 18:30:18 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:18.670054018Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=34739f17-13de-41df-8556-8cff3a5c9578 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:18 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:18.673257848Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=17fc32f1-ea66-4a1c-97ef-d73ff7b3918f name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:18 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:18.678292753Z" level=info msg="Creating container: default/busybox/busybox" id=a9880cef-756f-4458-b7b8-b5de4c7c8612 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:18 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:18.679200472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:18 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:18.684628657Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:18 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:18.685248694Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:18 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:18.718258636Z" level=info msg="Created container d04c275f26ce139e9498d3dd689cb12d54a82b4ed08e10ad60705a1bfb7864eb: default/busybox/busybox" id=a9880cef-756f-4458-b7b8-b5de4c7c8612 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:18 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:18.719482145Z" level=info msg="Starting container: d04c275f26ce139e9498d3dd689cb12d54a82b4ed08e10ad60705a1bfb7864eb" id=44968fcc-bd0a-44e2-84c6-7e266adf97b6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:30:18 default-k8s-diff-port-523257 crio[774]: time="2025-10-16T18:30:18.721849667Z" level=info msg="Started container" PID=1936 containerID=d04c275f26ce139e9498d3dd689cb12d54a82b4ed08e10ad60705a1bfb7864eb description=default/busybox/busybox id=44968fcc-bd0a-44e2-84c6-7e266adf97b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=acc5a294c866982462c64fc3ef8ca9e685b2843cdaef882da26e947411358393
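The CRI-O excerpt above records the full startup sequence for the default/busybox pod: RunPodSandbox, ImageStatus (cache miss), PullImage, CreateContainer, StartContainer. To verify the result from the node, the started container can be inspected through crictl; a sketch, with the container ID taken from the "Started container" line above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// ID from the "Started container" log line for default/busybox.
		id := "d04c275f26ce139e9498d3dd689cb12d54a82b4ed08e10ad60705a1bfb7864eb"
		out, err := exec.Command("sudo", "crictl", "inspect", id).Output()
		if err != nil {
			fmt.Println("crictl inspect failed:", err)
			return
		}
		fmt.Print(string(out))
	}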
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	d04c275f26ce1       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago        Running             busybox                   0                   acc5a294c8669       busybox                                                default
	b9795e9d81f96       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago       Running             coredns                   0                   a8ce8181d8ef6       coredns-66bc5c9577-jx8q2                               kube-system
	52110da3276fe       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago       Running             storage-provisioner       0                   894ee4c98859c       storage-provisioner                                    kube-system
	bf1faf8801207       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      52 seconds ago       Running             kindnet-cni               0                   e4d6513ca1b2b       kindnet-bctzw                                          kube-system
	e583918b2f8e7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      52 seconds ago       Running             kube-proxy                0                   020c2d142cfc1       kube-proxy-hrdcg                                       kube-system
	1d423737ac734       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      About a minute ago   Running             kube-scheduler            0                   9187e23be35f1       kube-scheduler-default-k8s-diff-port-523257            kube-system
	8214efa06c7d9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      About a minute ago   Running             kube-controller-manager   0                   a8d24f2b105ef       kube-controller-manager-default-k8s-diff-port-523257   kube-system
	96983c32479f6       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      About a minute ago   Running             kube-apiserver            0                   c62bb02457a13       kube-apiserver-default-k8s-diff-port-523257            kube-system
	eb9fc4d5632e5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      About a minute ago   Running             etcd                      0                   9366bddae2534       etcd-default-k8s-diff-port-523257                      kube-system
	
	
	==> coredns [b9795e9d81f9673d6541527fa87364cf99a4c0bc78576a2a2aa0a9ec25fff9de] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56070 - 48431 "HINFO IN 9085391044543728061.1509864396999774491. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.035731888s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-523257
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-523257
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=default-k8s-diff-port-523257
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_29_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:29:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-523257
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:30:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:30:14 +0000   Thu, 16 Oct 2025 18:29:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:30:14 +0000   Thu, 16 Oct 2025 18:29:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:30:14 +0000   Thu, 16 Oct 2025 18:29:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 18:30:14 +0000   Thu, 16 Oct 2025 18:30:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-523257
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                dd8663d9-3eb1-4047-bb84-b123d51b045c
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-jx8q2                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     53s
	  kube-system                 etcd-default-k8s-diff-port-523257                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         60s
	  kube-system                 kindnet-bctzw                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-default-k8s-diff-port-523257             250m (3%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-523257    200m (2%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-hrdcg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-default-k8s-diff-port-523257             100m (1%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 52s   kube-proxy       
	  Normal  Starting                 59s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s   kubelet          Node default-k8s-diff-port-523257 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s   kubelet          Node default-k8s-diff-port-523257 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s   kubelet          Node default-k8s-diff-port-523257 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s   node-controller  Node default-k8s-diff-port-523257 event: Registered Node default-k8s-diff-port-523257 in Controller
	  Normal  NodeReady                12s   kubelet          Node default-k8s-diff-port-523257 status is now: NodeReady
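The NodeReady event above corresponds to the Ready=True condition in the node status (transition at 18:30:14). A minimal client-go sketch for reading that condition programmatically; the kubeconfig path is an assumption based on the /var/lib/minikube/kubeconfig path the harness uses on the node:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: a kubeconfig that points at this cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"default-k8s-diff-port-523257", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			// Ready=True here is what the NodeReady event reports.
			if c.Type == corev1.NodeReady {
				fmt.Printf("Ready=%s since %s\n", c.Status, c.LastTransitionTime)
			}
		}
	}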
	
	
	==> dmesg <==
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.008703] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.022984] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023933] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +2.047848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +4.031714] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +8.063410] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[ +16.382910] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[Oct16 17:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	
	
	==> etcd [eb9fc4d5632e51bef6d10d8ef56ea25720af729fdfaf2ede1be75c49c35f7735] <==
	{"level":"warn","ts":"2025-10-16T18:29:24.298865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.314935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.322383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.329282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.336561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.343344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.350049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.357430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.365635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.373886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.380869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.388078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.398851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.405408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.411813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.419334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.426586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.433580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.440008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.446450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.452804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.463677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.470118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.476452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:29:24.523249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48416","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:30:26 up  1:12,  0 user,  load average: 3.47, 2.78, 1.81
	Linux default-k8s-diff-port-523257 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bf1faf8801207b64659400e93e428e6a5366c1049c0a39d85d33227ea8d67207] <==
	I1016 18:29:33.475268       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 18:29:33.475513       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1016 18:29:33.475673       1 main.go:148] setting mtu 1500 for CNI 
	I1016 18:29:33.475690       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 18:29:33.475703       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T18:29:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 18:29:33.728370       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 18:29:33.728397       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 18:29:33.728410       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 18:29:33.768530       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1016 18:30:03.679848       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1016 18:30:03.728476       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1016 18:30:03.729817       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1016 18:30:03.769005       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1016 18:30:05.228579       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 18:30:05.228611       1 metrics.go:72] Registering metrics
	I1016 18:30:05.228810       1 controller.go:711] "Syncing nftables rules"
	I1016 18:30:13.686872       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1016 18:30:13.686928       1 main.go:301] handling current node
	I1016 18:30:23.682825       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1016 18:30:23.682895       1 main.go:301] handling current node
	
	
	==> kube-apiserver [96983c32479f60fbf07ced4278c9de9cba2b74894ee1ca08c187b3ae1beb6aec] <==
	I1016 18:29:25.008042       1 cache.go:39] Caches are synced for autoregister controller
	I1016 18:29:25.009511       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 18:29:25.013945       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1016 18:29:25.013995       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:29:25.020768       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:29:25.021051       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1016 18:29:25.196749       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:29:25.912017       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1016 18:29:25.916854       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1016 18:29:25.916876       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 18:29:26.453612       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 18:29:26.497487       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 18:29:26.616390       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1016 18:29:26.623407       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1016 18:29:26.624547       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 18:29:26.628386       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 18:29:26.935794       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 18:29:27.780900       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 18:29:27.791838       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1016 18:29:27.799411       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1016 18:29:31.987185       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 18:29:32.741811       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:29:32.745941       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:29:32.937623       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1016 18:30:25.027000       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:33694: use of closed network connection
	
	
	==> kube-controller-manager [8214efa06c7d973988b7c5d457087e13091503ca2a8440c3b45500aa152a5e03] <==
	I1016 18:29:31.934093       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1016 18:29:31.934769       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1016 18:29:31.935973       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1016 18:29:31.936010       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1016 18:29:31.936031       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1016 18:29:31.936052       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1016 18:29:31.936107       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1016 18:29:31.936130       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 18:29:31.936595       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1016 18:29:31.936916       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1016 18:29:31.936956       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1016 18:29:31.936974       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1016 18:29:31.936987       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 18:29:31.937001       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1016 18:29:31.937012       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1016 18:29:31.937040       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1016 18:29:31.938870       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1016 18:29:31.938964       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1016 18:29:31.939044       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-523257"
	I1016 18:29:31.939110       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:29:31.939113       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1016 18:29:31.946413       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1016 18:29:31.951687       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 18:29:31.956900       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:30:16.946194       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e583918b2f8e71ca21e4aaf8a5740a5f7cef641b6aff0abb7eea8d84c7747dbb] <==
	I1016 18:29:33.405091       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:29:33.471125       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:29:33.572130       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:29:33.572178       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1016 18:29:33.572285       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:29:33.599706       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:29:33.599791       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:29:33.606457       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:29:33.606846       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:29:33.607274       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:29:33.609614       1 config.go:309] "Starting node config controller"
	I1016 18:29:33.609638       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:29:33.609646       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:29:33.609873       1 config.go:200] "Starting service config controller"
	I1016 18:29:33.609881       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:29:33.609897       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:29:33.609902       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:29:33.609913       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:29:33.609918       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:29:33.710977       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 18:29:33.711009       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1016 18:29:33.711018       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1d423737ac734d6fdbf184b80bcd9de8801f6f5bf66d8fd4a1a9f2492aea2cd2] <==
	E1016 18:29:24.961634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:29:24.961744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 18:29:24.961744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 18:29:24.961798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1016 18:29:24.961907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 18:29:24.962429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 18:29:24.962502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 18:29:24.962535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 18:29:24.962571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 18:29:24.962581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 18:29:24.962708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 18:29:24.962732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:29:24.962792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 18:29:25.768275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 18:29:25.779499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:29:25.854068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 18:29:25.857138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 18:29:25.929594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 18:29:25.981182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:29:25.987363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 18:29:26.116040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 18:29:26.180555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 18:29:26.217904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 18:29:26.222029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1016 18:29:26.657724       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 18:29:28 default-k8s-diff-port-523257 kubelet[1327]: E1016 18:29:28.642173    1327 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-default-k8s-diff-port-523257\" already exists" pod="kube-system/kube-apiserver-default-k8s-diff-port-523257"
	Oct 16 18:29:28 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:29:28.667834    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-523257" podStartSLOduration=2.667809987 podStartE2EDuration="2.667809987s" podCreationTimestamp="2025-10-16 18:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:29:28.658224387 +0000 UTC m=+1.128016135" watchObservedRunningTime="2025-10-16 18:29:28.667809987 +0000 UTC m=+1.137601733"
	Oct 16 18:29:28 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:29:28.677491    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-523257" podStartSLOduration=2.677466823 podStartE2EDuration="2.677466823s" podCreationTimestamp="2025-10-16 18:29:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:29:28.668011716 +0000 UTC m=+1.137803461" watchObservedRunningTime="2025-10-16 18:29:28.677466823 +0000 UTC m=+1.147258571"
	Oct 16 18:29:28 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:29:28.687248    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-523257" podStartSLOduration=1.687225202 podStartE2EDuration="1.687225202s" podCreationTimestamp="2025-10-16 18:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:29:28.677573936 +0000 UTC m=+1.147365684" watchObservedRunningTime="2025-10-16 18:29:28.687225202 +0000 UTC m=+1.157016951"
	Oct 16 18:29:28 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:29:28.697892    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-523257" podStartSLOduration=1.6978621280000001 podStartE2EDuration="1.697862128s" podCreationTimestamp="2025-10-16 18:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:29:28.687311389 +0000 UTC m=+1.157103120" watchObservedRunningTime="2025-10-16 18:29:28.697862128 +0000 UTC m=+1.167653875"
	Oct 16 18:29:31 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:29:31.941590    1327 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 16 18:29:31 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:29:31.942343    1327 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 16 18:29:33 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:29:33.033804    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ddde19a-7b12-4815-8e04-38066f73935e-kube-proxy\") pod \"kube-proxy-hrdcg\" (UID: \"2ddde19a-7b12-4815-8e04-38066f73935e\") " pod="kube-system/kube-proxy-hrdcg"
	Oct 16 18:29:33 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:29:33.033848    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a71883f8-793b-41d1-bbad-1c47e65b7768-xtables-lock\") pod \"kindnet-bctzw\" (UID: \"a71883f8-793b-41d1-bbad-1c47e65b7768\") " pod="kube-system/kindnet-bctzw"
	Oct 16 18:29:33 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:29:33.033868    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8lhg\" (UniqueName: \"kubernetes.io/projected/a71883f8-793b-41d1-bbad-1c47e65b7768-kube-api-access-f8lhg\") pod \"kindnet-bctzw\" (UID: \"a71883f8-793b-41d1-bbad-1c47e65b7768\") " pod="kube-system/kindnet-bctzw"
	Oct 16 18:29:33 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:29:33.033889    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ddde19a-7b12-4815-8e04-38066f73935e-lib-modules\") pod \"kube-proxy-hrdcg\" (UID: \"2ddde19a-7b12-4815-8e04-38066f73935e\") " pod="kube-system/kube-proxy-hrdcg"
	Oct 16 18:29:33 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:29:33.033909    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dcl7\" (UniqueName: \"kubernetes.io/projected/2ddde19a-7b12-4815-8e04-38066f73935e-kube-api-access-8dcl7\") pod \"kube-proxy-hrdcg\" (UID: \"2ddde19a-7b12-4815-8e04-38066f73935e\") " pod="kube-system/kube-proxy-hrdcg"
	Oct 16 18:29:33 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:29:33.033988    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a71883f8-793b-41d1-bbad-1c47e65b7768-cni-cfg\") pod \"kindnet-bctzw\" (UID: \"a71883f8-793b-41d1-bbad-1c47e65b7768\") " pod="kube-system/kindnet-bctzw"
	Oct 16 18:29:33 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:29:33.034043    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a71883f8-793b-41d1-bbad-1c47e65b7768-lib-modules\") pod \"kindnet-bctzw\" (UID: \"a71883f8-793b-41d1-bbad-1c47e65b7768\") " pod="kube-system/kindnet-bctzw"
	Oct 16 18:29:33 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:29:33.034072    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ddde19a-7b12-4815-8e04-38066f73935e-xtables-lock\") pod \"kube-proxy-hrdcg\" (UID: \"2ddde19a-7b12-4815-8e04-38066f73935e\") " pod="kube-system/kube-proxy-hrdcg"
	Oct 16 18:29:33 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:29:33.655814    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-bctzw" podStartSLOduration=1.655792188 podStartE2EDuration="1.655792188s" podCreationTimestamp="2025-10-16 18:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:29:33.655685942 +0000 UTC m=+6.125477690" watchObservedRunningTime="2025-10-16 18:29:33.655792188 +0000 UTC m=+6.125583938"
	Oct 16 18:29:33 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:29:33.665570    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hrdcg" podStartSLOduration=1.665550471 podStartE2EDuration="1.665550471s" podCreationTimestamp="2025-10-16 18:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:29:33.665427211 +0000 UTC m=+6.135218960" watchObservedRunningTime="2025-10-16 18:29:33.665550471 +0000 UTC m=+6.135342219"
	Oct 16 18:30:14 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:30:14.212204    1327 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 16 18:30:14 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:30:14.328624    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvj2k\" (UniqueName: \"kubernetes.io/projected/038605d0-574f-4f02-8695-cc80a08e2e43-kube-api-access-jvj2k\") pod \"coredns-66bc5c9577-jx8q2\" (UID: \"038605d0-574f-4f02-8695-cc80a08e2e43\") " pod="kube-system/coredns-66bc5c9577-jx8q2"
	Oct 16 18:30:14 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:30:14.328682    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fls7v\" (UniqueName: \"kubernetes.io/projected/5fa5cdd4-25fd-4a41-9e29-ae166842b3ca-kube-api-access-fls7v\") pod \"storage-provisioner\" (UID: \"5fa5cdd4-25fd-4a41-9e29-ae166842b3ca\") " pod="kube-system/storage-provisioner"
	Oct 16 18:30:14 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:30:14.328710    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/038605d0-574f-4f02-8695-cc80a08e2e43-config-volume\") pod \"coredns-66bc5c9577-jx8q2\" (UID: \"038605d0-574f-4f02-8695-cc80a08e2e43\") " pod="kube-system/coredns-66bc5c9577-jx8q2"
	Oct 16 18:30:14 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:30:14.328815    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5fa5cdd4-25fd-4a41-9e29-ae166842b3ca-tmp\") pod \"storage-provisioner\" (UID: \"5fa5cdd4-25fd-4a41-9e29-ae166842b3ca\") " pod="kube-system/storage-provisioner"
	Oct 16 18:30:14 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:30:14.751852    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jx8q2" podStartSLOduration=41.751826963 podStartE2EDuration="41.751826963s" podCreationTimestamp="2025-10-16 18:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:30:14.751683902 +0000 UTC m=+47.221475651" watchObservedRunningTime="2025-10-16 18:30:14.751826963 +0000 UTC m=+47.221618710"
	Oct 16 18:30:16 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:30:16.926043    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=43.926015842 podStartE2EDuration="43.926015842s" podCreationTimestamp="2025-10-16 18:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-16 18:30:14.775907942 +0000 UTC m=+47.245699690" watchObservedRunningTime="2025-10-16 18:30:16.926015842 +0000 UTC m=+49.395807590"
	Oct 16 18:30:17 default-k8s-diff-port-523257 kubelet[1327]: I1016 18:30:17.047112    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2twls\" (UniqueName: \"kubernetes.io/projected/e96222c9-4604-49ac-a0f7-6328bfe2f82a-kube-api-access-2twls\") pod \"busybox\" (UID: \"e96222c9-4604-49ac-a0f7-6328bfe2f82a\") " pod="default/busybox"
	
	
	==> storage-provisioner [52110da3276fe9714a55b89f5c9ecb17b9c373d0909cc66a56bb44e0e71e1508] <==
	I1016 18:30:14.634678       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 18:30:14.647860       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 18:30:14.647916       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1016 18:30:14.651386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:14.657335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 18:30:14.657539       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 18:30:14.657685       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-523257_07ac6c25-2df7-40d0-bb34-ea3d5907674f!
	I1016 18:30:14.657802       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a9f3852a-feb3-4f6a-a138-16ba01201036", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-523257_07ac6c25-2df7-40d0-bb34-ea3d5907674f became leader
	W1016 18:30:14.660809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:14.664495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 18:30:14.758825       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-523257_07ac6c25-2df7-40d0-bb34-ea3d5907674f!
	W1016 18:30:16.668115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:16.675505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:18.680538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:18.690813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:20.694916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:20.699060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:22.702916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:22.707129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:24.710457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:24.715609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:26.718926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:26.723507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
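The repeated storage-provisioner warnings in the capture above come from its leader-election lock, which still reads and writes the deprecated core/v1 Endpoints object kube-system/k8s.io-minikube-hostpath. A quick way to inspect both the legacy lock object and the discovery.k8s.io/v1 replacement on this cluster (a sketch, assuming the test's kubeconfig context is still reachable) is:

	kubectl --context default-k8s-diff-port-523257 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	kubectl --context default-k8s-diff-port-523257 get endpointslices.discovery.k8s.io -A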
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-523257 -n default-k8s-diff-port-523257
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-523257 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.15s)
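The field selector used in the post-mortem above is a handy pattern for surfacing unhealthy pods. A slightly broader variant that also filters out completed pods (a sketch; any reachable context works) is:

	kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded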

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (5.92s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-794682 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-794682 --alsologtostderr -v=1: exit status 80 (2.313439731s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-794682 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:30:32.538153  274382 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:30:32.538433  274382 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:30:32.538445  274382 out.go:374] Setting ErrFile to fd 2...
	I1016 18:30:32.538451  274382 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:30:32.538694  274382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:30:32.538969  274382 out.go:368] Setting JSON to false
	I1016 18:30:32.538999  274382 mustload.go:65] Loading cluster: newest-cni-794682
	I1016 18:30:32.539379  274382 config.go:182] Loaded profile config "newest-cni-794682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:30:32.539813  274382 cli_runner.go:164] Run: docker container inspect newest-cni-794682 --format={{.State.Status}}
	I1016 18:30:32.558642  274382 host.go:66] Checking if "newest-cni-794682" exists ...
	I1016 18:30:32.558967  274382 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:30:32.621905  274382 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:87 OomKillDisable:false NGoroutines:93 SystemTime:2025-10-16 18:30:32.61053387 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:30:32.623373  274382 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-794682 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1016 18:30:32.625975  274382 out.go:179] * Pausing node newest-cni-794682 ... 
	I1016 18:30:32.627944  274382 host.go:66] Checking if "newest-cni-794682" exists ...
	I1016 18:30:32.628255  274382 ssh_runner.go:195] Run: systemctl --version
	I1016 18:30:32.628298  274382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:32.649990  274382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/newest-cni-794682/id_rsa Username:docker}
	I1016 18:30:32.746852  274382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:30:32.761946  274382 pause.go:52] kubelet running: true
	I1016 18:30:32.762020  274382 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:30:32.924415  274382 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:30:32.924500  274382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:30:33.003257  274382 cri.go:89] found id: "f22bad01be6a35f79cdfb25800fd0f0d7cd7370345fa9e1932b29677a6bdbb05"
	I1016 18:30:33.003282  274382 cri.go:89] found id: "2626af93c90a36452cc6e8d9a0079d7fc6a8712dbdc80a69341194f4764988b6"
	I1016 18:30:33.003289  274382 cri.go:89] found id: "ab9094c30ff22e9bfab5eec94732ce5878232de56d4a25020e9e9ad3911f02bf"
	I1016 18:30:33.003294  274382 cri.go:89] found id: "91c42f392b9070df7654a66f4ce71b4f085c9f171014a9ee55a5f0bb8c327f14"
	I1016 18:30:33.003298  274382 cri.go:89] found id: "e35972d82e9c22de02eeb267933f4b1af09651a36aa1249da16d297f40f25ec5"
	I1016 18:30:33.003303  274382 cri.go:89] found id: "91494023e5e1bf1f6307bcee4e2d533dfa2cd3d963e37741b3f5ab473d748861"
	I1016 18:30:33.003307  274382 cri.go:89] found id: ""
	I1016 18:30:33.003390  274382 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:30:33.016191  274382 retry.go:31] will retry after 232.550827ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:30:33Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:30:33.249694  274382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:30:33.264577  274382 pause.go:52] kubelet running: false
	I1016 18:30:33.264653  274382 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:30:33.381285  274382 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:30:33.381381  274382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:30:33.453324  274382 cri.go:89] found id: "f22bad01be6a35f79cdfb25800fd0f0d7cd7370345fa9e1932b29677a6bdbb05"
	I1016 18:30:33.453344  274382 cri.go:89] found id: "2626af93c90a36452cc6e8d9a0079d7fc6a8712dbdc80a69341194f4764988b6"
	I1016 18:30:33.453348  274382 cri.go:89] found id: "ab9094c30ff22e9bfab5eec94732ce5878232de56d4a25020e9e9ad3911f02bf"
	I1016 18:30:33.453351  274382 cri.go:89] found id: "91c42f392b9070df7654a66f4ce71b4f085c9f171014a9ee55a5f0bb8c327f14"
	I1016 18:30:33.453353  274382 cri.go:89] found id: "e35972d82e9c22de02eeb267933f4b1af09651a36aa1249da16d297f40f25ec5"
	I1016 18:30:33.453357  274382 cri.go:89] found id: "91494023e5e1bf1f6307bcee4e2d533dfa2cd3d963e37741b3f5ab473d748861"
	I1016 18:30:33.453360  274382 cri.go:89] found id: ""
	I1016 18:30:33.453408  274382 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:30:33.466806  274382 retry.go:31] will retry after 259.428599ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:30:33Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:30:33.727363  274382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:30:33.740403  274382 pause.go:52] kubelet running: false
	I1016 18:30:33.740463  274382 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:30:33.854380  274382 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:30:33.854463  274382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:30:33.923525  274382 cri.go:89] found id: "f22bad01be6a35f79cdfb25800fd0f0d7cd7370345fa9e1932b29677a6bdbb05"
	I1016 18:30:33.923546  274382 cri.go:89] found id: "2626af93c90a36452cc6e8d9a0079d7fc6a8712dbdc80a69341194f4764988b6"
	I1016 18:30:33.923552  274382 cri.go:89] found id: "ab9094c30ff22e9bfab5eec94732ce5878232de56d4a25020e9e9ad3911f02bf"
	I1016 18:30:33.923556  274382 cri.go:89] found id: "91c42f392b9070df7654a66f4ce71b4f085c9f171014a9ee55a5f0bb8c327f14"
	I1016 18:30:33.923561  274382 cri.go:89] found id: "e35972d82e9c22de02eeb267933f4b1af09651a36aa1249da16d297f40f25ec5"
	I1016 18:30:33.923566  274382 cri.go:89] found id: "91494023e5e1bf1f6307bcee4e2d533dfa2cd3d963e37741b3f5ab473d748861"
	I1016 18:30:33.923569  274382 cri.go:89] found id: ""
	I1016 18:30:33.923633  274382 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:30:33.935506  274382 retry.go:31] will retry after 643.542377ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:30:33Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:30:34.579291  274382 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:30:34.592922  274382 pause.go:52] kubelet running: false
	I1016 18:30:34.593027  274382 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:30:34.708744  274382 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:30:34.708841  274382 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:30:34.778924  274382 cri.go:89] found id: "f22bad01be6a35f79cdfb25800fd0f0d7cd7370345fa9e1932b29677a6bdbb05"
	I1016 18:30:34.778948  274382 cri.go:89] found id: "2626af93c90a36452cc6e8d9a0079d7fc6a8712dbdc80a69341194f4764988b6"
	I1016 18:30:34.778952  274382 cri.go:89] found id: "ab9094c30ff22e9bfab5eec94732ce5878232de56d4a25020e9e9ad3911f02bf"
	I1016 18:30:34.778959  274382 cri.go:89] found id: "91c42f392b9070df7654a66f4ce71b4f085c9f171014a9ee55a5f0bb8c327f14"
	I1016 18:30:34.778961  274382 cri.go:89] found id: "e35972d82e9c22de02eeb267933f4b1af09651a36aa1249da16d297f40f25ec5"
	I1016 18:30:34.778964  274382 cri.go:89] found id: "91494023e5e1bf1f6307bcee4e2d533dfa2cd3d963e37741b3f5ab473d748861"
	I1016 18:30:34.778967  274382 cri.go:89] found id: ""
	I1016 18:30:34.779007  274382 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:30:34.792915  274382 out.go:203] 
	W1016 18:30:34.794442  274382 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:30:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:30:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:30:34.794462  274382 out.go:285] * 
	* 
	W1016 18:30:34.798538  274382 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:30:34.800128  274382 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-794682 --alsologtostderr -v=1 failed: exit status 80
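The repeated "open /run/runc: no such file or directory" retries in the stderr capture above show the pause path shelling out to sudo runc list -f json, which reads runc's default state directory; if cri-o drives its containers through a different OCI runtime or state root, that directory never exists and every retry fails the same way. A first diagnostic pass inside the node (a sketch, assuming SSH access to the profile; the grep pattern is illustrative) might be:

	minikube ssh -p newest-cni-794682
	sudo crictl ps -a                            # containers as cri-o sees them
	sudo crio config | grep -iA3 runtime         # which OCI runtime and root cri-o is configured with
	sudo ls -d /run/runc /run/crun 2>/dev/null   # which runtime state directories actually exist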
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-794682
helpers_test.go:243: (dbg) docker inspect newest-cni-794682:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173",
	        "Created": "2025-10-16T18:29:53.821117165Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 270966,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:30:21.430933296Z",
	            "FinishedAt": "2025-10-16T18:30:20.163725788Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173/hostname",
	        "HostsPath": "/var/lib/docker/containers/c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173/hosts",
	        "LogPath": "/var/lib/docker/containers/c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173/c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173-json.log",
	        "Name": "/newest-cni-794682",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-794682:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-794682",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173",
	                "LowerDir": "/var/lib/docker/overlay2/c7b8e24a1f9d7fba0e516e0f5cbd09bd62316d6698df3d8c1cda2d0d3d6d0153-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7b8e24a1f9d7fba0e516e0f5cbd09bd62316d6698df3d8c1cda2d0d3d6d0153/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7b8e24a1f9d7fba0e516e0f5cbd09bd62316d6698df3d8c1cda2d0d3d6d0153/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7b8e24a1f9d7fba0e516e0f5cbd09bd62316d6698df3d8c1cda2d0d3d6d0153/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-794682",
	                "Source": "/var/lib/docker/volumes/newest-cni-794682/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-794682",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-794682",
	                "name.minikube.sigs.k8s.io": "newest-cni-794682",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "510d58cac0483c173902bce591bb392c1413dc5154b1ce279d2602c251fc7349",
	            "SandboxKey": "/var/run/docker/netns/510d58cac048",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-794682": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:f8:9b:c8:dd:a7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e00e8380442887174d300c66955f01f91b4ede1590a4ed3c23c8634e39c04bf",
	                    "EndpointID": "7648fcd26d0326e4afefa03e9e840318defd1d81f5c0304110282681506fb368",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-794682",
	                        "c5fcc0506110"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
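For quick triage, the fields surfaced in this inspect dump can also be queried directly with docker's Go-template formatter. A minimal sketch, assuming only the container name from the run above (the template mirrors the one minikube itself uses in the provisioning logs further down):

	# print container state, pause flag, and the published SSH port
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}} ssh={{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-794682
	# against this dump it would print: running paused=false ssh=33093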
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-794682 -n newest-cni-794682
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-794682 -n newest-cni-794682: exit status 2 (313.860793ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
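A non-zero exit from minikube status is expected whenever any component is not fully running, which is why the helper notes it "may be ok" after a failed pause. The per-component view can be separated with minikube's documented status template fields; a sketch using the same binary and profile as above:

	out/minikube-linux-amd64 status -p newest-cni-794682 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'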
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-794682 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p old-k8s-version-956814 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │                     │
	│ delete  │ -p old-k8s-version-956814                                                                                                                                                                                                                     │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ delete  │ -p old-k8s-version-956814                                                                                                                                                                                                                     │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ start   │ -p embed-certs-063117 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p cert-expiration-489554 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-489554       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p cert-expiration-489554                                                                                                                                                                                                                     │ cert-expiration-489554       │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p disable-driver-mounts-246527                                                                                                                                                                                                               │ disable-driver-mounts-246527 │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p default-k8s-diff-port-523257 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:30 UTC │
	│ image   │ no-preload-808539 image list --format=json                                                                                                                                                                                                    │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ pause   │ -p no-preload-808539 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-063117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ stop    │ -p embed-certs-063117 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:30 UTC │
	│ delete  │ -p no-preload-808539                                                                                                                                                                                                                          │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p no-preload-808539                                                                                                                                                                                                                          │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p newest-cni-794682 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:30 UTC │
	│ addons  │ enable dashboard -p embed-certs-063117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ start   │ -p embed-certs-063117 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-794682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ stop    │ -p newest-cni-794682 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ addons  │ enable dashboard -p newest-cni-794682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ start   │ -p newest-cni-794682 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-523257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-523257 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ image   │ newest-cni-794682 image list --format=json                                                                                                                                                                                                    │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ pause   │ -p newest-cni-794682 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:30:21
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:30:21.152584  270736 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:30:21.152931  270736 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:30:21.152943  270736 out.go:374] Setting ErrFile to fd 2...
	I1016 18:30:21.152949  270736 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:30:21.153283  270736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:30:21.153985  270736 out.go:368] Setting JSON to false
	I1016 18:30:21.155547  270736 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4369,"bootTime":1760635052,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:30:21.155661  270736 start.go:141] virtualization: kvm guest
	I1016 18:30:21.159830  270736 out.go:179] * [newest-cni-794682] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:30:21.164009  270736 notify.go:220] Checking for updates...
	I1016 18:30:21.164046  270736 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:30:21.166051  270736 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:30:21.167545  270736 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:30:21.168937  270736 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 18:30:21.170373  270736 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:30:21.172157  270736 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:30:21.174450  270736 config.go:182] Loaded profile config "newest-cni-794682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:30:21.175151  270736 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:30:21.206447  270736 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 18:30:21.206560  270736 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:30:21.275675  270736 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-16 18:30:21.263342732 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:30:21.275800  270736 docker.go:318] overlay module found
	I1016 18:30:21.280927  270736 out.go:179] * Using the docker driver based on existing profile
	I1016 18:30:21.282522  270736 start.go:305] selected driver: docker
	I1016 18:30:21.282543  270736 start.go:925] validating driver "docker" against &{Name:newest-cni-794682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-794682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:30:21.282643  270736 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:30:21.283370  270736 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:30:21.345288  270736 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-16 18:30:21.334490708 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:30:21.345566  270736 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1016 18:30:21.345589  270736 cni.go:84] Creating CNI manager for ""
	I1016 18:30:21.345634  270736 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:30:21.345666  270736 start.go:349] cluster config:
	{Name:newest-cni-794682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-794682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:30:21.347912  270736 out.go:179] * Starting "newest-cni-794682" primary control-plane node in "newest-cni-794682" cluster
	I1016 18:30:21.349743  270736 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:30:21.351215  270736 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:30:21.352482  270736 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:30:21.352534  270736 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1016 18:30:21.352557  270736 cache.go:58] Caching tarball of preloaded images
	I1016 18:30:21.352605  270736 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:30:21.352638  270736 preload.go:233] Found /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1016 18:30:21.352646  270736 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:30:21.352759  270736 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682/config.json ...
	I1016 18:30:21.377574  270736 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:30:21.377594  270736 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:30:21.377609  270736 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:30:21.377632  270736 start.go:360] acquireMachinesLock for newest-cni-794682: {Name:mkc6c572380046cef9b391cb88c87708b2d5d19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:30:21.377760  270736 start.go:364] duration metric: took 81.158µs to acquireMachinesLock for "newest-cni-794682"
	I1016 18:30:21.377786  270736 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:30:21.377793  270736 fix.go:54] fixHost starting: 
	I1016 18:30:21.378064  270736 cli_runner.go:164] Run: docker container inspect newest-cni-794682 --format={{.State.Status}}
	I1016 18:30:21.399419  270736 fix.go:112] recreateIfNeeded on newest-cni-794682: state=Stopped err=<nil>
	W1016 18:30:21.399455  270736 fix.go:138] unexpected machine state, will restart: <nil>
	W1016 18:30:19.263386  265507 pod_ready.go:104] pod "coredns-66bc5c9577-v85b5" is not "Ready", error: <nil>
	W1016 18:30:21.264000  265507 pod_ready.go:104] pod "coredns-66bc5c9577-v85b5" is not "Ready", error: <nil>
	I1016 18:30:19.444319  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:30:19.444740  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:30:19.444797  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:30:19.444857  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:30:19.484474  228782 cri.go:89] found id: "0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:19.484498  228782 cri.go:89] found id: ""
	I1016 18:30:19.484508  228782 logs.go:282] 1 containers: [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98]
	I1016 18:30:19.484567  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:19.489338  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:30:19.489416  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:30:19.526406  228782 cri.go:89] found id: ""
	I1016 18:30:19.526494  228782 logs.go:282] 0 containers: []
	W1016 18:30:19.526510  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:30:19.526518  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:30:19.526576  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:30:19.565282  228782 cri.go:89] found id: ""
	I1016 18:30:19.565310  228782 logs.go:282] 0 containers: []
	W1016 18:30:19.565321  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:30:19.565329  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:30:19.565389  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:30:19.602445  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:19.602470  228782 cri.go:89] found id: ""
	I1016 18:30:19.602479  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:30:19.602535  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:19.607731  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:30:19.607800  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:30:19.642926  228782 cri.go:89] found id: ""
	I1016 18:30:19.642964  228782 logs.go:282] 0 containers: []
	W1016 18:30:19.642975  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:30:19.642982  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:30:19.643045  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:30:19.677132  228782 cri.go:89] found id: "5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:19.677156  228782 cri.go:89] found id: ""
	I1016 18:30:19.677165  228782 logs.go:282] 1 containers: [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a]
	I1016 18:30:19.677224  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:19.682584  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:30:19.682648  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:30:19.712396  228782 cri.go:89] found id: ""
	I1016 18:30:19.712423  228782 logs.go:282] 0 containers: []
	W1016 18:30:19.712434  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:30:19.712442  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:30:19.712492  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:30:19.741255  228782 cri.go:89] found id: ""
	I1016 18:30:19.741281  228782 logs.go:282] 0 containers: []
	W1016 18:30:19.741292  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:30:19.741302  228782 logs.go:123] Gathering logs for kube-controller-manager [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a] ...
	I1016 18:30:19.741317  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:19.776253  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:30:19.776280  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:30:19.837597  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:30:19.837630  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:30:19.871309  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:30:19.871335  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:30:19.963099  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:30:19.963135  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:30:19.982694  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:30:19.982755  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:30:20.068957  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:30:20.068989  228782 logs.go:123] Gathering logs for kube-apiserver [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98] ...
	I1016 18:30:20.069004  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:20.117950  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:30:20.117985  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:22.697499  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:30:22.697921  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:30:22.697978  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:30:22.698033  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:30:22.727992  228782 cri.go:89] found id: "0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:22.728011  228782 cri.go:89] found id: ""
	I1016 18:30:22.728019  228782 logs.go:282] 1 containers: [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98]
	I1016 18:30:22.728087  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:22.732224  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:30:22.732294  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:30:22.761154  228782 cri.go:89] found id: ""
	I1016 18:30:22.761181  228782 logs.go:282] 0 containers: []
	W1016 18:30:22.761191  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:30:22.761199  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:30:22.761277  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:30:22.790807  228782 cri.go:89] found id: ""
	I1016 18:30:22.790834  228782 logs.go:282] 0 containers: []
	W1016 18:30:22.790844  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:30:22.790852  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:30:22.790910  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:30:22.818463  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:22.818482  228782 cri.go:89] found id: ""
	I1016 18:30:22.818489  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:30:22.818543  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:22.822771  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:30:22.822842  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:30:22.850980  228782 cri.go:89] found id: ""
	I1016 18:30:22.851008  228782 logs.go:282] 0 containers: []
	W1016 18:30:22.851016  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:30:22.851025  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:30:22.851081  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:30:22.879807  228782 cri.go:89] found id: "5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:22.879829  228782 cri.go:89] found id: ""
	I1016 18:30:22.879837  228782 logs.go:282] 1 containers: [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a]
	I1016 18:30:22.879891  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:22.884035  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:30:22.884101  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:30:22.910938  228782 cri.go:89] found id: ""
	I1016 18:30:22.910962  228782 logs.go:282] 0 containers: []
	W1016 18:30:22.910971  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:30:22.910978  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:30:22.911037  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:30:22.938572  228782 cri.go:89] found id: ""
	I1016 18:30:22.938600  228782 logs.go:282] 0 containers: []
	W1016 18:30:22.938610  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:30:22.938621  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:30:22.938637  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:30:23.029415  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:30:23.029447  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:30:23.044905  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:30:23.044932  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:30:23.102676  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:30:23.102698  228782 logs.go:123] Gathering logs for kube-apiserver [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98] ...
	I1016 18:30:23.102710  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:23.135474  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:30:23.135510  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:23.188617  228782 logs.go:123] Gathering logs for kube-controller-manager [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a] ...
	I1016 18:30:23.188651  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:23.217050  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:30:23.217083  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:30:23.274650  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:30:23.274680  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:30:21.400992  270736 out.go:252] * Restarting existing docker container for "newest-cni-794682" ...
	I1016 18:30:21.401102  270736 cli_runner.go:164] Run: docker start newest-cni-794682
	I1016 18:30:21.673444  270736 cli_runner.go:164] Run: docker container inspect newest-cni-794682 --format={{.State.Status}}
	I1016 18:30:21.696413  270736 kic.go:430] container "newest-cni-794682" state is running.
	I1016 18:30:21.696902  270736 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-794682
	I1016 18:30:21.721293  270736 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682/config.json ...
	I1016 18:30:21.721489  270736 machine.go:93] provisionDockerMachine start ...
	I1016 18:30:21.721565  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:21.740696  270736 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:21.740972  270736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1016 18:30:21.740992  270736 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:30:21.741729  270736 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47254->127.0.0.1:33093: read: connection reset by peer
	I1016 18:30:24.883031  270736 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-794682
	
	I1016 18:30:24.883059  270736 ubuntu.go:182] provisioning hostname "newest-cni-794682"
	I1016 18:30:24.883118  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:24.901950  270736 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:24.902179  270736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1016 18:30:24.902194  270736 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-794682 && echo "newest-cni-794682" | sudo tee /etc/hostname
	I1016 18:30:25.052844  270736 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-794682
	
	I1016 18:30:25.052908  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:25.074063  270736 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:25.074361  270736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1016 18:30:25.074392  270736 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-794682' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-794682/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-794682' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:30:25.213044  270736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:30:25.213082  270736 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8849/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8849/.minikube}
	I1016 18:30:25.213108  270736 ubuntu.go:190] setting up certificates
	I1016 18:30:25.213121  270736 provision.go:84] configureAuth start
	I1016 18:30:25.213178  270736 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-794682
	I1016 18:30:25.233313  270736 provision.go:143] copyHostCerts
	I1016 18:30:25.233382  270736 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem, removing ...
	I1016 18:30:25.233398  270736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem
	I1016 18:30:25.233499  270736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem (1078 bytes)
	I1016 18:30:25.233659  270736 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem, removing ...
	I1016 18:30:25.233673  270736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem
	I1016 18:30:25.233738  270736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem (1123 bytes)
	I1016 18:30:25.233826  270736 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem, removing ...
	I1016 18:30:25.233838  270736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem
	I1016 18:30:25.233878  270736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem (1679 bytes)
	I1016 18:30:25.233953  270736 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem org=jenkins.newest-cni-794682 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-794682]
	I1016 18:30:25.522191  270736 provision.go:177] copyRemoteCerts
	I1016 18:30:25.522257  270736 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:30:25.522295  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:25.540519  270736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/newest-cni-794682/id_rsa Username:docker}
	I1016 18:30:25.640499  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:30:25.659964  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 18:30:25.679636  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1016 18:30:25.699356  270736 provision.go:87] duration metric: took 486.222013ms to configureAuth
	I1016 18:30:25.699382  270736 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:30:25.699870  270736 config.go:182] Loaded profile config "newest-cni-794682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:30:25.699984  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:25.720440  270736 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:25.720675  270736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1016 18:30:25.720701  270736 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:30:26.017237  270736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:30:26.017263  270736 machine.go:96] duration metric: took 4.29574996s to provisionDockerMachine
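
The SSH command above drops an environment file at /etc/sysconfig/crio.minikube carrying --insecure-registry 10.96.0.0/12 (the service CIDR) and restarts CRI-O. A quick manual check that the drop-in landed and took effect; the assumption that the crio unit sources this file via EnvironmentFile is a kicbase convention, not something this log proves:

    cat /etc/sysconfig/crio.minikube              # expect CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio | grep -i EnvironmentFile  # assumed wiring; unit layout may differ
    systemctl is-active crio                      # should print "active" after the restart
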
	I1016 18:30:26.017277  270736 start.go:293] postStartSetup for "newest-cni-794682" (driver="docker")
	I1016 18:30:26.017292  270736 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:30:26.017364  270736 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:30:26.017414  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:26.042542  270736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/newest-cni-794682/id_rsa Username:docker}
	I1016 18:30:26.149164  270736 ssh_runner.go:195] Run: cat /etc/os-release
	W1016 18:30:23.763146  265507 pod_ready.go:104] pod "coredns-66bc5c9577-v85b5" is not "Ready", error: <nil>
	W1016 18:30:25.763423  265507 pod_ready.go:104] pod "coredns-66bc5c9577-v85b5" is not "Ready", error: <nil>
	I1016 18:30:26.153748  270736 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:30:26.153786  270736 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:30:26.153799  270736 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/addons for local assets ...
	I1016 18:30:26.153869  270736 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/files for local assets ...
	I1016 18:30:26.153979  270736 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem -> 123752.pem in /etc/ssl/certs
	I1016 18:30:26.154102  270736 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:30:26.163759  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:30:26.186409  270736 start.go:296] duration metric: took 169.116529ms for postStartSetup
	I1016 18:30:26.186494  270736 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:30:26.186541  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:26.207689  270736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/newest-cni-794682/id_rsa Username:docker}
	I1016 18:30:26.307661  270736 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:30:26.312689  270736 fix.go:56] duration metric: took 4.934886255s for fixHost
	I1016 18:30:26.312753  270736 start.go:83] releasing machines lock for "newest-cni-794682", held for 4.93497707s
	I1016 18:30:26.312821  270736 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-794682
	I1016 18:30:26.332943  270736 ssh_runner.go:195] Run: cat /version.json
	I1016 18:30:26.332992  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:26.333061  270736 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:30:26.333152  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:26.354193  270736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/newest-cni-794682/id_rsa Username:docker}
	I1016 18:30:26.356108  270736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/newest-cni-794682/id_rsa Username:docker}
	I1016 18:30:26.519378  270736 ssh_runner.go:195] Run: systemctl --version
	I1016 18:30:26.527614  270736 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:30:26.568056  270736 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:30:26.573566  270736 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:30:26.573632  270736 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:30:26.583220  270736 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:30:26.583241  270736 start.go:495] detecting cgroup driver to use...
	I1016 18:30:26.583271  270736 detect.go:190] detected "systemd" cgroup driver on host os
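
detect.go settles on the "systemd" cgroup driver by inspecting the host. A rough hand check of the same signal (these heuristics are mine, not minikube's exact logic):

    stat -fc %T /sys/fs/cgroup/   # "cgroup2fs" => unified (cgroup v2) hierarchy
    ps -p 1 -o comm=              # "systemd" => systemd is PID 1, so the systemd driver fits
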
	I1016 18:30:26.583310  270736 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:30:26.599098  270736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:30:26.613675  270736 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:30:26.613747  270736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:30:26.633808  270736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:30:26.649365  270736 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:30:26.743304  270736 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:30:26.846320  270736 docker.go:234] disabling docker service ...
	I1016 18:30:26.846382  270736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:30:26.864234  270736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:30:26.879821  270736 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:30:26.968850  270736 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:30:27.063530  270736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:30:27.079189  270736 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:30:27.094469  270736 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:30:27.094530  270736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:27.104027  270736 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1016 18:30:27.104085  270736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:27.114598  270736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:27.125585  270736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:27.135531  270736 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:30:27.144618  270736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:27.155075  270736 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:27.165398  270736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:27.176126  270736 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:30:27.185564  270736 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:30:27.194613  270736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:30:27.290696  270736 ssh_runner.go:195] Run: sudo systemctl restart crio
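
The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause image, systemd cgroup manager, conmon_cgroup = "pod", and the unprivileged-port sysctl, followed by a daemon-reload and restart. One way to confirm the edits stuck (the expected lines follow from the commands above; the file may contain more):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
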
	I1016 18:30:27.424850  270736 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:30:27.424905  270736 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:30:27.429202  270736 start.go:563] Will wait 60s for crictl version
	I1016 18:30:27.429261  270736 ssh_runner.go:195] Run: which crictl
	I1016 18:30:27.433167  270736 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:30:27.460841  270736 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
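
That version probe succeeds because the /etc/crictl.yaml written above points crictl at CRI-O's socket. Two standard crictl sanity checks (the expected outputs in comments are read off this log, not guaranteed):

    sudo crictl config --get runtime-endpoint   # expect unix:///var/run/crio/crio.sock
    sudo crictl version                         # expect RuntimeName: cri-o, RuntimeVersion: 1.34.1
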
	I1016 18:30:27.460932  270736 ssh_runner.go:195] Run: crio --version
	I1016 18:30:27.498214  270736 ssh_runner.go:195] Run: crio --version
	I1016 18:30:27.536523  270736 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:30:27.537910  270736 cli_runner.go:164] Run: docker network inspect newest-cni-794682 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:30:27.557578  270736 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1016 18:30:27.562079  270736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
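
That one-liner is minikube's idempotent /etc/hosts update: filter out any stale host.minikube.internal row, append the fresh mapping, and copy the temp file back over /etc/hosts. The same pattern generalized (NAME/ADDR are placeholders):

    NAME=host.minikube.internal ADDR=192.168.94.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
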
	I1016 18:30:27.576035  270736 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1016 18:30:27.577664  270736 kubeadm.go:883] updating cluster {Name:newest-cni-794682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-794682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:30:27.577866  270736 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:30:27.578007  270736 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:30:27.611936  270736 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:30:27.611954  270736 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:30:27.611998  270736 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:30:27.639039  270736 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:30:27.639058  270736 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:30:27.639066  270736 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1016 18:30:27.639162  270736 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-794682 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-794682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
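
The [Unit]/[Service] fragment above becomes the systemd drop-in scp'd below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the empty ExecStart= clears the base unit's command before the override sets the real one. To read the unit exactly as systemd merges it:

    systemctl cat kubelet           # base unit plus every drop-in, in merge order
    systemd-delta --type=extended   # lists all drop-in extensions system-wide
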
	I1016 18:30:27.639226  270736 ssh_runner.go:195] Run: crio config
	I1016 18:30:27.688192  270736 cni.go:84] Creating CNI manager for ""
	I1016 18:30:27.688217  270736 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:30:27.688234  270736 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1016 18:30:27.688254  270736 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-794682 NodeName:newest-cni-794682 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:30:27.688381  270736 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-794682"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
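
The four stacked documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets scp'd below as /var/tmp/minikube/kubeadm.yaml.new (2211 bytes). Recent kubeadm can lint such a file before it is used; a hedged sketch, since flag support varies by version:

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
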
	
	I1016 18:30:27.688439  270736 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:30:27.697175  270736 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:30:27.697242  270736 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:30:27.705450  270736 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1016 18:30:27.720001  270736 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:30:27.733348  270736 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1016 18:30:27.746820  270736 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1016 18:30:27.750957  270736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:30:27.762308  270736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:30:27.846237  270736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:30:27.872347  270736 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682 for IP: 192.168.94.2
	I1016 18:30:27.872368  270736 certs.go:195] generating shared ca certs ...
	I1016 18:30:27.872386  270736 certs.go:227] acquiring lock for ca certs: {Name:mkebf15a3970a66f77cf66e14f6efeaafcab5e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:27.872522  270736 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key
	I1016 18:30:27.872562  270736 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key
	I1016 18:30:27.872574  270736 certs.go:257] generating profile certs ...
	I1016 18:30:27.872668  270736 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682/client.key
	I1016 18:30:27.872750  270736 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682/apiserver.key.fc2f255e
	I1016 18:30:27.872800  270736 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682/proxy-client.key
	I1016 18:30:27.872901  270736 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem (1338 bytes)
	W1016 18:30:27.872928  270736 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375_empty.pem, impossibly tiny 0 bytes
	I1016 18:30:27.872937  270736 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem (1679 bytes)
	I1016 18:30:27.872958  270736 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem (1078 bytes)
	I1016 18:30:27.872980  270736 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:30:27.872999  270736 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem (1679 bytes)
	I1016 18:30:27.873036  270736 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:30:27.873698  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:30:27.893567  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1016 18:30:27.914471  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:30:27.934468  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1016 18:30:27.958051  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1016 18:30:27.978137  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1016 18:30:27.996431  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:30:28.014695  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:30:28.032308  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:30:28.050340  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem --> /usr/share/ca-certificates/12375.pem (1338 bytes)
	I1016 18:30:28.069115  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /usr/share/ca-certificates/123752.pem (1708 bytes)
	I1016 18:30:28.088174  270736 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:30:28.100964  270736 ssh_runner.go:195] Run: openssl version
	I1016 18:30:28.107054  270736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:30:28.115573  270736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:30:28.119475  270736 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:30:28.119531  270736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:30:28.155305  270736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:30:28.163879  270736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12375.pem && ln -fs /usr/share/ca-certificates/12375.pem /etc/ssl/certs/12375.pem"
	I1016 18:30:28.172335  270736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12375.pem
	I1016 18:30:28.176254  270736 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:49 /usr/share/ca-certificates/12375.pem
	I1016 18:30:28.176317  270736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12375.pem
	I1016 18:30:28.211644  270736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12375.pem /etc/ssl/certs/51391683.0"
	I1016 18:30:28.220360  270736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123752.pem && ln -fs /usr/share/ca-certificates/123752.pem /etc/ssl/certs/123752.pem"
	I1016 18:30:28.229458  270736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123752.pem
	I1016 18:30:28.233520  270736 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:49 /usr/share/ca-certificates/123752.pem
	I1016 18:30:28.233583  270736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123752.pem
	I1016 18:30:28.270999  270736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123752.pem /etc/ssl/certs/3ec20f2e.0"
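
The test -L || ln -fs calls above reproduce what c_rehash does: OpenSSL resolves trust anchors in /etc/ssl/certs by subject-hash file names (<hash>.0), so each PEM needs a symlink named after its hash. By hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # here h=b5213941
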
	I1016 18:30:28.279676  270736 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:30:28.283817  270736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:30:28.318660  270736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:30:28.353995  270736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:30:28.393326  270736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:30:28.436750  270736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:30:28.480217  270736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
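
Each -checkend 86400 run above asks whether the certificate expires within the next 86400 seconds (24h); openssl exits 0 if the cert stays valid past that window and non-zero otherwise, which is the signal minikube keys off. For example:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
      echo "cert valid for at least another 24h"
    else
      echo "cert expires within 24h (or is already expired)"
    fi
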
	I1016 18:30:28.534156  270736 kubeadm.go:400] StartCluster: {Name:newest-cni-794682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-794682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:30:28.534328  270736 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:30:28.534392  270736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:30:28.579409  270736 cri.go:89] found id: "ab9094c30ff22e9bfab5eec94732ce5878232de56d4a25020e9e9ad3911f02bf"
	I1016 18:30:28.579431  270736 cri.go:89] found id: "91c42f392b9070df7654a66f4ce71b4f085c9f171014a9ee55a5f0bb8c327f14"
	I1016 18:30:28.579437  270736 cri.go:89] found id: "e35972d82e9c22de02eeb267933f4b1af09651a36aa1249da16d297f40f25ec5"
	I1016 18:30:28.579441  270736 cri.go:89] found id: "91494023e5e1bf1f6307bcee4e2d533dfa2cd3d963e37741b3f5ab473d748861"
	I1016 18:30:28.579445  270736 cri.go:89] found id: ""
	I1016 18:30:28.579484  270736 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 18:30:28.596443  270736 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:30:28Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:30:28.596510  270736 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:30:28.607878  270736 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 18:30:28.607899  270736 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 18:30:28.607946  270736 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 18:30:28.619147  270736 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:30:28.620476  270736 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-794682" does not appear in /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:30:28.621440  270736 kubeconfig.go:62] /home/jenkins/minikube-integration/21738-8849/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-794682" cluster setting kubeconfig missing "newest-cni-794682" context setting]
	I1016 18:30:28.622735  270736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:28.624688  270736 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 18:30:28.635776  270736 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1016 18:30:28.635813  270736 kubeadm.go:601] duration metric: took 27.9081ms to restartPrimaryControlPlane
	I1016 18:30:28.635826  270736 kubeadm.go:402] duration metric: took 101.678552ms to StartCluster
	I1016 18:30:28.635846  270736 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:28.635915  270736 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:30:28.638086  270736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:28.638402  270736 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:30:28.638633  270736 config.go:182] Loaded profile config "newest-cni-794682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:30:28.638678  270736 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:30:28.638783  270736 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-794682"
	I1016 18:30:28.638806  270736 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-794682"
	W1016 18:30:28.638815  270736 addons.go:247] addon storage-provisioner should already be in state true
	I1016 18:30:28.638846  270736 host.go:66] Checking if "newest-cni-794682" exists ...
	I1016 18:30:28.638898  270736 addons.go:69] Setting dashboard=true in profile "newest-cni-794682"
	I1016 18:30:28.638925  270736 addons.go:69] Setting default-storageclass=true in profile "newest-cni-794682"
	I1016 18:30:28.638947  270736 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-794682"
	I1016 18:30:28.638968  270736 addons.go:238] Setting addon dashboard=true in "newest-cni-794682"
	W1016 18:30:28.638979  270736 addons.go:247] addon dashboard should already be in state true
	I1016 18:30:28.639007  270736 host.go:66] Checking if "newest-cni-794682" exists ...
	I1016 18:30:28.639342  270736 cli_runner.go:164] Run: docker container inspect newest-cni-794682 --format={{.State.Status}}
	I1016 18:30:28.639432  270736 cli_runner.go:164] Run: docker container inspect newest-cni-794682 --format={{.State.Status}}
	I1016 18:30:28.639534  270736 cli_runner.go:164] Run: docker container inspect newest-cni-794682 --format={{.State.Status}}
	I1016 18:30:28.641605  270736 out.go:179] * Verifying Kubernetes components...
	I1016 18:30:28.642980  270736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:30:28.670617  270736 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:30:28.670684  270736 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1016 18:30:28.670985  270736 addons.go:238] Setting addon default-storageclass=true in "newest-cni-794682"
	W1016 18:30:28.671005  270736 addons.go:247] addon default-storageclass should already be in state true
	I1016 18:30:28.671032  270736 host.go:66] Checking if "newest-cni-794682" exists ...
	I1016 18:30:28.672096  270736 cli_runner.go:164] Run: docker container inspect newest-cni-794682 --format={{.State.Status}}
	I1016 18:30:28.672761  270736 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:30:28.672823  270736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:30:28.672896  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:28.674592  270736 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1016 18:30:25.806872  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:30:25.807310  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:30:25.807362  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:30:25.807421  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:30:25.839834  228782 cri.go:89] found id: "0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:25.839861  228782 cri.go:89] found id: ""
	I1016 18:30:25.839871  228782 logs.go:282] 1 containers: [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98]
	I1016 18:30:25.839931  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:25.844258  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:30:25.844335  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:30:25.874569  228782 cri.go:89] found id: ""
	I1016 18:30:25.874597  228782 logs.go:282] 0 containers: []
	W1016 18:30:25.874608  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:30:25.874615  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:30:25.874672  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:30:25.904988  228782 cri.go:89] found id: ""
	I1016 18:30:25.905010  228782 logs.go:282] 0 containers: []
	W1016 18:30:25.905017  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:30:25.905023  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:30:25.905084  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:30:25.937008  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:25.937042  228782 cri.go:89] found id: ""
	I1016 18:30:25.937053  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:30:25.937109  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:25.941845  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:30:25.941911  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:30:25.971978  228782 cri.go:89] found id: ""
	I1016 18:30:25.972010  228782 logs.go:282] 0 containers: []
	W1016 18:30:25.972025  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:30:25.972032  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:30:25.972091  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:30:26.003366  228782 cri.go:89] found id: "5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:26.003389  228782 cri.go:89] found id: ""
	I1016 18:30:26.003399  228782 logs.go:282] 1 containers: [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a]
	I1016 18:30:26.003452  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:26.009005  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:30:26.009142  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:30:26.042590  228782 cri.go:89] found id: ""
	I1016 18:30:26.042608  228782 logs.go:282] 0 containers: []
	W1016 18:30:26.042623  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:30:26.042630  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:30:26.042677  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:30:26.072694  228782 cri.go:89] found id: ""
	I1016 18:30:26.072731  228782 logs.go:282] 0 containers: []
	W1016 18:30:26.072741  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:30:26.072750  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:30:26.072763  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:26.133910  228782 logs.go:123] Gathering logs for kube-controller-manager [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a] ...
	I1016 18:30:26.133942  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:26.165627  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:30:26.165651  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:30:26.235559  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:30:26.235590  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:30:26.269670  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:30:26.269699  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:30:26.374015  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:30:26.374044  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:30:26.390353  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:30:26.390391  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:30:26.450507  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:30:26.450526  228782 logs.go:123] Gathering logs for kube-apiserver [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98] ...
	I1016 18:30:26.450540  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:28.988780  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:30:28.989610  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:30:28.989690  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:30:28.989768  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:30:29.026445  228782 cri.go:89] found id: "0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:29.026472  228782 cri.go:89] found id: ""
	I1016 18:30:29.026482  228782 logs.go:282] 1 containers: [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98]
	I1016 18:30:29.026540  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:29.031759  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:30:29.031846  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:30:28.676026  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1016 18:30:28.676043  270736 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1016 18:30:28.676135  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:28.712580  270736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/newest-cni-794682/id_rsa Username:docker}
	I1016 18:30:28.713119  270736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/newest-cni-794682/id_rsa Username:docker}
	I1016 18:30:28.714947  270736 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:30:28.714967  270736 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:30:28.715029  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:28.741836  270736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/newest-cni-794682/id_rsa Username:docker}
	I1016 18:30:28.800526  270736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:30:28.822089  270736 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:30:28.822153  270736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:30:28.837899  270736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:30:28.839004  270736 api_server.go:72] duration metric: took 200.562784ms to wait for apiserver process to appear ...
	I1016 18:30:28.839051  270736 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:30:28.839071  270736 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:30:28.839562  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1016 18:30:28.839584  270736 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1016 18:30:28.865259  270736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:30:28.868423  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1016 18:30:28.868444  270736 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1016 18:30:28.891481  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1016 18:30:28.891605  270736 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1016 18:30:28.911475  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1016 18:30:28.911501  270736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1016 18:30:28.933606  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1016 18:30:28.933634  270736 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1016 18:30:28.951512  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1016 18:30:28.951545  270736 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1016 18:30:28.969113  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1016 18:30:28.969137  270736 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1016 18:30:28.987817  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1016 18:30:28.987851  270736 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1016 18:30:29.003978  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1016 18:30:29.003997  270736 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1016 18:30:29.021823  270736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
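
The healthz probes that follow first come back 403: with the API server barely up, anonymous requests are rejected until the RBAC bootstrap roles (e.g. system:public-info-viewer) are recreated, and then 500 while the rbac/bootstrap-roles post-start hook is still pending. The probe amounts to (self-signed serving cert, hence -k):

    curl -k https://192.168.94.2:8443/healthz
    # 403 {"reason":"Forbidden"} until RBAC bootstrap lands, then 500, then "ok"
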
	I1016 18:30:30.328142  270736 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1016 18:30:30.328170  270736 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1016 18:30:30.328184  270736 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:30:30.352908  270736 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1016 18:30:30.352943  270736 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1016 18:30:30.353040  270736 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:30:30.362670  270736 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1016 18:30:30.362701  270736 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1016 18:30:30.839985  270736 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:30:30.845004  270736 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:30:30.845033  270736 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
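	The per-check breakdown above is the apiserver's verbose health output; the two [-] entries (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are post-start hooks that routinely lag a restart and, as the probes below show, clear within a couple of seconds. A minimal sketch for fetching the same breakdown by hand, assuming the profile's kubectl context is active:
	
		kubectl --context newest-cni-794682 get --raw='/healthz?verbose'
		kubectl --context newest-cni-794682 get --raw='/readyz?verbose'
	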
	I1016 18:30:30.882306  270736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.044369462s)
	I1016 18:30:30.882347  270736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.017051365s)
	I1016 18:30:30.882474  270736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.860596066s)
	I1016 18:30:30.884225  270736 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-794682 addons enable metrics-server
	
	I1016 18:30:30.893472  270736 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1016 18:30:30.894967  270736 addons.go:514] duration metric: took 2.256273765s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	W1016 18:30:28.263632  265507 pod_ready.go:104] pod "coredns-66bc5c9577-v85b5" is not "Ready", error: <nil>
	W1016 18:30:30.265847  265507 pod_ready.go:104] pod "coredns-66bc5c9577-v85b5" is not "Ready", error: <nil>
	I1016 18:30:31.339909  270736 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:30:31.344657  270736 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:30:31.344688  270736 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:30:31.839332  270736 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:30:31.845868  270736 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1016 18:30:31.846793  270736 api_server.go:141] control plane version: v1.34.1
	I1016 18:30:31.846818  270736 api_server.go:131] duration metric: took 3.007759792s to wait for apiserver health ...
	I1016 18:30:31.846826  270736 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:30:31.850233  270736 system_pods.go:59] 8 kube-system pods found
	I1016 18:30:31.850260  270736 system_pods.go:61] "coredns-66bc5c9577-7k82h" [127d26c2-1922-4ad8-b6cb-a86f9aefc431] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1016 18:30:31.850268  270736 system_pods.go:61] "etcd-newest-cni-794682" [3b93c2af-67b5-49b1-a0d8-0222ed51a01f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 18:30:31.850282  270736 system_pods.go:61] "kindnet-chqrm" [f697f30d-64fa-4695-ae47-0268f2604e30] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1016 18:30:31.850291  270736 system_pods.go:61] "kube-apiserver-newest-cni-794682" [e42f2077-4b39-4426-9f1b-67c3faec9f6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:30:31.850302  270736 system_pods.go:61] "kube-controller-manager-newest-cni-794682" [29288a90-424a-435b-9fe3-1c4e512c032e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:30:31.850312  270736 system_pods.go:61] "kube-proxy-dvbrk" [15fff10c-5233-4292-8a44-6005c5ad3ff1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1016 18:30:31.850316  270736 system_pods.go:61] "kube-scheduler-newest-cni-794682" [4a6ae32c-791f-4592-bf85-5c9d9fba8c17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:30:31.850320  270736 system_pods.go:61] "storage-provisioner" [5d551025-22ed-4596-b776-7f087cb2cd62] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1016 18:30:31.850326  270736 system_pods.go:74] duration metric: took 3.49563ms to wait for pod list to return data ...
	I1016 18:30:31.850336  270736 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:30:31.852686  270736 default_sa.go:45] found service account: "default"
	I1016 18:30:31.852704  270736 default_sa.go:55] duration metric: took 2.363021ms for default service account to be created ...
	I1016 18:30:31.852733  270736 kubeadm.go:586] duration metric: took 3.214277249s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1016 18:30:31.852747  270736 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:30:31.854814  270736 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1016 18:30:31.854835  270736 node_conditions.go:123] node cpu capacity is 8
	I1016 18:30:31.854847  270736 node_conditions.go:105] duration metric: took 2.096841ms to run NodePressure ...
	I1016 18:30:31.854858  270736 start.go:241] waiting for startup goroutines ...
	I1016 18:30:31.854867  270736 start.go:246] waiting for cluster config update ...
	I1016 18:30:31.854877  270736 start.go:255] writing updated cluster config ...
	I1016 18:30:31.855149  270736 ssh_runner.go:195] Run: rm -f paused
	I1016 18:30:31.903897  270736 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1016 18:30:31.906661  270736 out.go:179] * Done! kubectl is now configured to use "newest-cni-794682" cluster and "default" namespace by default
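	For reference, the Pending pods above (coredns and storage-provisioner blocked by the node.kubernetes.io/not-ready taint) are expected this early: the taint stays until the CNI plugin reports ready, and this profile finishes start-up without waiting for apps_running or node_ready (see the kubeadm wait map above). A quick manual check, sketched on the assumption that the profile's kubectl context is active:
	
		kubectl --context newest-cni-794682 describe node newest-cni-794682 | grep -i taint
		kubectl --context newest-cni-794682 get pods -n kube-system
	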
	I1016 18:30:29.066541  228782 cri.go:89] found id: ""
	I1016 18:30:29.066568  228782 logs.go:282] 0 containers: []
	W1016 18:30:29.066579  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:30:29.066586  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:30:29.066639  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:30:29.101207  228782 cri.go:89] found id: ""
	I1016 18:30:29.101257  228782 logs.go:282] 0 containers: []
	W1016 18:30:29.101267  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:30:29.101279  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:30:29.101338  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:30:29.133959  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:29.133985  228782 cri.go:89] found id: ""
	I1016 18:30:29.133995  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:30:29.134052  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:29.138831  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:30:29.138902  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:30:29.167271  228782 cri.go:89] found id: ""
	I1016 18:30:29.167301  228782 logs.go:282] 0 containers: []
	W1016 18:30:29.167311  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:30:29.167318  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:30:29.167381  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:30:29.196794  228782 cri.go:89] found id: "5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:29.196816  228782 cri.go:89] found id: ""
	I1016 18:30:29.196826  228782 logs.go:282] 1 containers: [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a]
	I1016 18:30:29.196884  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:29.201020  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:30:29.201089  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:30:29.233089  228782 cri.go:89] found id: ""
	I1016 18:30:29.233121  228782 logs.go:282] 0 containers: []
	W1016 18:30:29.233131  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:30:29.233141  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:30:29.233205  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:30:29.266067  228782 cri.go:89] found id: ""
	I1016 18:30:29.266095  228782 logs.go:282] 0 containers: []
	W1016 18:30:29.266105  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:30:29.266114  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:30:29.266127  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:30:29.336952  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:30:29.336988  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:30:29.371919  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:30:29.371954  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:30:29.491901  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:30:29.491945  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:30:29.511278  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:30:29.511312  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:30:29.579955  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:30:29.579985  228782 logs.go:123] Gathering logs for kube-apiserver [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98] ...
	I1016 18:30:29.580003  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:29.621236  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:30:29.621267  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:29.691968  228782 logs.go:123] Gathering logs for kube-controller-manager [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a] ...
	I1016 18:30:29.692014  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:32.227896  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:30:32.228293  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:30:32.228345  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:30:32.228394  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:30:32.258694  228782 cri.go:89] found id: "0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:32.258729  228782 cri.go:89] found id: ""
	I1016 18:30:32.258740  228782 logs.go:282] 1 containers: [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98]
	I1016 18:30:32.258788  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:32.263551  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:30:32.263613  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:30:32.296401  228782 cri.go:89] found id: ""
	I1016 18:30:32.296425  228782 logs.go:282] 0 containers: []
	W1016 18:30:32.296434  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:30:32.296442  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:30:32.296497  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:30:32.326547  228782 cri.go:89] found id: ""
	I1016 18:30:32.326571  228782 logs.go:282] 0 containers: []
	W1016 18:30:32.326582  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:30:32.326589  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:30:32.326644  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:30:32.356936  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:32.356958  228782 cri.go:89] found id: ""
	I1016 18:30:32.356967  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:30:32.357019  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:32.362544  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:30:32.362635  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:30:32.395505  228782 cri.go:89] found id: ""
	I1016 18:30:32.395532  228782 logs.go:282] 0 containers: []
	W1016 18:30:32.395543  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:30:32.395551  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:30:32.395622  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:30:32.428706  228782 cri.go:89] found id: "5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:32.428763  228782 cri.go:89] found id: ""
	I1016 18:30:32.428774  228782 logs.go:282] 1 containers: [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a]
	I1016 18:30:32.428838  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:32.434349  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:30:32.434416  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:30:32.470198  228782 cri.go:89] found id: ""
	I1016 18:30:32.470228  228782 logs.go:282] 0 containers: []
	W1016 18:30:32.470239  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:30:32.470247  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:30:32.470301  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:30:32.504552  228782 cri.go:89] found id: ""
	I1016 18:30:32.504581  228782 logs.go:282] 0 containers: []
	W1016 18:30:32.504591  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:30:32.504601  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:30:32.504615  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:30:32.573672  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:30:32.573710  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:30:32.612881  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:30:32.612911  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:30:32.713049  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:30:32.713083  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:30:32.728378  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:30:32.728404  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:30:32.791028  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:30:32.791051  228782 logs.go:123] Gathering logs for kube-apiserver [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98] ...
	I1016 18:30:32.791067  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:32.833610  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:30:32.833641  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:32.896053  228782 logs.go:123] Gathering logs for kube-controller-manager [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a] ...
	I1016 18:30:32.896081  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
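	The cri.go/logs.go entries above are minikube's log-gathering pass: for each expected component it lists matching CRI containers and tails the logs of any it finds. A minimal shell sketch of that loop (assuming crictl is available on the node, e.g. after minikube -p <profile> ssh):
	
		for name in kube-apiserver etcd coredns kube-scheduler \
		            kube-proxy kube-controller-manager kindnet storage-provisioner; do
		  ids=$(sudo crictl ps -a --quiet --name="$name")
		  if [ -z "$ids" ]; then
		    echo "No container was found matching \"$name\""
		  else
		    for id in $ids; do sudo crictl logs --tail 400 "$id"; done
		  fi
		done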
	
	
	==> CRI-O <==
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.248675199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.252250782Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=5865aaca-34fd-458f-b80a-e56ed6390d50 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.252813363Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=722601b9-2140-4082-9997-0d75be1d9803 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.253910686Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.254341164Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.254517199Z" level=info msg="Ran pod sandbox 8034b8a69a4811885619534c44384fa378509b121712b49b1924a32548398f8c with infra container: kube-system/kindnet-chqrm/POD" id=5865aaca-34fd-458f-b80a-e56ed6390d50 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.255054612Z" level=info msg="Ran pod sandbox f2920fde5e2f1a6a87cd0079a3708583283cd10755a2b69ef1a6eaade7a33941 with infra container: kube-system/kube-proxy-dvbrk/POD" id=722601b9-2140-4082-9997-0d75be1d9803 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.255702357Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=043c7751-ee3c-428a-a1bd-019bd59d5aca name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.255967959Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f541bdf0-81d8-4496-a3db-ec0f018ce2c7 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.256625876Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=c7da9519-57fd-46f4-947e-f17f125f382f name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.256883205Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=002c5443-d437-47e7-96f0-5573bed9e830 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.257674872Z" level=info msg="Creating container: kube-system/kindnet-chqrm/kindnet-cni" id=daade1f8-6fbf-487d-bf5f-8419e48f467a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.25778474Z" level=info msg="Creating container: kube-system/kube-proxy-dvbrk/kube-proxy" id=462fb99d-008c-4b01-9980-69a118298cd5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.257968435Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.258046216Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.263000129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.263704883Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.263809257Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.264334938Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.291099066Z" level=info msg="Created container f22bad01be6a35f79cdfb25800fd0f0d7cd7370345fa9e1932b29677a6bdbb05: kube-system/kindnet-chqrm/kindnet-cni" id=daade1f8-6fbf-487d-bf5f-8419e48f467a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.291769057Z" level=info msg="Starting container: f22bad01be6a35f79cdfb25800fd0f0d7cd7370345fa9e1932b29677a6bdbb05" id=78c98d64-fa01-4b0f-a96e-8f942542b463 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.293625661Z" level=info msg="Started container" PID=1044 containerID=f22bad01be6a35f79cdfb25800fd0f0d7cd7370345fa9e1932b29677a6bdbb05 description=kube-system/kindnet-chqrm/kindnet-cni id=78c98d64-fa01-4b0f-a96e-8f942542b463 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8034b8a69a4811885619534c44384fa378509b121712b49b1924a32548398f8c
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.294182891Z" level=info msg="Created container 2626af93c90a36452cc6e8d9a0079d7fc6a8712dbdc80a69341194f4764988b6: kube-system/kube-proxy-dvbrk/kube-proxy" id=462fb99d-008c-4b01-9980-69a118298cd5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.294902966Z" level=info msg="Starting container: 2626af93c90a36452cc6e8d9a0079d7fc6a8712dbdc80a69341194f4764988b6" id=b6a6a56a-32e0-4ff4-9e82-be21d7a65314 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.297826472Z" level=info msg="Started container" PID=1045 containerID=2626af93c90a36452cc6e8d9a0079d7fc6a8712dbdc80a69341194f4764988b6 description=kube-system/kube-proxy-dvbrk/kube-proxy id=b6a6a56a-32e0-4ff4-9e82-be21d7a65314 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f2920fde5e2f1a6a87cd0079a3708583283cd10755a2b69ef1a6eaade7a33941
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f22bad01be6a3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   8034b8a69a481       kindnet-chqrm                               kube-system
	2626af93c90a3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   f2920fde5e2f1       kube-proxy-dvbrk                            kube-system
	ab9094c30ff22       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   b9d339bf057ec       kube-controller-manager-newest-cni-794682   kube-system
	91c42f392b907       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   55a91e8bc84f1       etcd-newest-cni-794682                      kube-system
	e35972d82e9c2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   5694790785f1e       kube-scheduler-newest-cni-794682            kube-system
	91494023e5e1b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   447cfbb10e532       kube-apiserver-newest-cni-794682            kube-system
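	
	The table above is crictl's view from inside the node: every control-plane container is Running on its second attempt (ATTEMPT 1) after the restart. The same snapshot can be taken by hand, assuming the default minikube SSH access:
	
		minikube -p newest-cni-794682 ssh -- sudo crictl ps -a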
	
	
	==> describe nodes <==
	Name:               newest-cni-794682
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-794682
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=newest-cni-794682
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_30_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:30:06 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-794682
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:30:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:30:30 +0000   Thu, 16 Oct 2025 18:30:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:30:30 +0000   Thu, 16 Oct 2025 18:30:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:30:30 +0000   Thu, 16 Oct 2025 18:30:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 16 Oct 2025 18:30:30 +0000   Thu, 16 Oct 2025 18:30:04 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-794682
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                815246ac-cdb2-4d78-ba36-a1b7df678ead
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-794682                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         27s
	  kube-system                 kindnet-chqrm                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21s
	  kube-system                 kube-apiserver-newest-cni-794682             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-newest-cni-794682    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-dvbrk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-scheduler-newest-cni-794682             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 21s              kube-proxy       
	  Normal  Starting                 4s               kube-proxy       
	  Normal  Starting                 27s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s              kubelet          Node newest-cni-794682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s              kubelet          Node newest-cni-794682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s              kubelet          Node newest-cni-794682 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22s              node-controller  Node newest-cni-794682 event: Registered Node newest-cni-794682 in Controller
	  Normal  Starting                 8s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x8 over 8s)  kubelet          Node newest-cni-794682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 8s)  kubelet          Node newest-cni-794682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x8 over 8s)  kubelet          Node newest-cni-794682 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2s               node-controller  Node newest-cni-794682 event: Registered Node newest-cni-794682 in Controller
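	
	The Ready=False condition above is the NetworkPluginNotReady state that keeps the not-ready taint in place; it flips once kindnet writes a CNI config into /etc/cni/net.d/. One way to watch for the transition, again assuming the profile's kubectl context:
	
		kubectl --context newest-cni-794682 wait --for=condition=Ready \
		  node/newest-cni-794682 --timeout=90s
		kubectl --context newest-cni-794682 get node newest-cni-794682 \
		  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'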
	
	
	==> dmesg <==
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.008703] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.022984] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023933] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +2.047848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +4.031714] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +8.063410] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[ +16.382910] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[Oct16 17:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
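	
	The martian-source warnings are the kernel flagging packets that claim a 127.0.0.1 source while arriving on eth0 for a pod IP; their timestamps (17:46-17:47) predate this cluster's 18:30 start, so they are residue from earlier tests in the run rather than a symptom here. They can be isolated with roughly the same invocation the report uses:
	
		sudo dmesg --level warn,err,crit,alert,emerg | grep -i martian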
	
	
	==> etcd [91c42f392b9070df7654a66f4ce71b4f085c9f171014a9ee55a5f0bb8c327f14] <==
	{"level":"warn","ts":"2025-10-16T18:30:29.710157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.718168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.728695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.743122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.749923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.756585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.764025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.771870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.778581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.785774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.792527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.799208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.805678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.813396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.826846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.835023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.842257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.849448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.856435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.863276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.870820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.882521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.890306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.896605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.941346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49944","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:30:35 up  1:13,  0 user,  load average: 5.13, 3.16, 1.95
	Linux newest-cni-794682 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f22bad01be6a35f79cdfb25800fd0f0d7cd7370345fa9e1932b29677a6bdbb05] <==
	I1016 18:30:31.543532       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 18:30:31.543810       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1016 18:30:31.543943       1 main.go:148] setting mtu 1500 for CNI 
	I1016 18:30:31.543963       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 18:30:31.543973       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T18:30:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 18:30:31.744862       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 18:30:31.744928       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 18:30:31.744941       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 18:30:31.745099       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 18:30:32.045228       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 18:30:32.045254       1 metrics.go:72] Registering metrics
	I1016 18:30:32.045309       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [91494023e5e1bf1f6307bcee4e2d533dfa2cd3d963e37741b3f5ab473d748861] <==
	I1016 18:30:30.420798       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 18:30:30.421081       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1016 18:30:30.421363       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1016 18:30:30.421439       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1016 18:30:30.421409       1 aggregator.go:171] initial CRD sync complete...
	I1016 18:30:30.421850       1 autoregister_controller.go:144] Starting autoregister controller
	I1016 18:30:30.421868       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 18:30:30.421875       1 cache.go:39] Caches are synced for autoregister controller
	I1016 18:30:30.423900       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1016 18:30:30.427687       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1016 18:30:30.429870       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1016 18:30:30.433977       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1016 18:30:30.434238       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:30:30.450509       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:30:30.680631       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 18:30:30.710511       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 18:30:30.730924       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 18:30:30.739281       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 18:30:30.747825       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 18:30:30.786641       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.42.49"}
	I1016 18:30:30.798427       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.209.126"}
	I1016 18:30:31.324906       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 18:30:34.175551       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 18:30:34.226223       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 18:30:34.276471       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ab9094c30ff22e9bfab5eec94732ce5878232de56d4a25020e9e9ad3911f02bf] <==
	I1016 18:30:33.748248       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1016 18:30:33.752439       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1016 18:30:33.772098       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1016 18:30:33.772221       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1016 18:30:33.772243       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1016 18:30:33.772260       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1016 18:30:33.772638       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1016 18:30:33.772398       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 18:30:33.772392       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 18:30:33.773113       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1016 18:30:33.773203       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1016 18:30:33.773528       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1016 18:30:33.774697       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1016 18:30:33.774792       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 18:30:33.774808       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1016 18:30:33.774829       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1016 18:30:33.776870       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1016 18:30:33.777440       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:30:33.777507       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 18:30:33.782602       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 18:30:33.784960       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1016 18:30:33.787451       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 18:30:33.790672       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1016 18:30:33.792966       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1016 18:30:33.796368       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2626af93c90a36452cc6e8d9a0079d7fc6a8712dbdc80a69341194f4764988b6] <==
	I1016 18:30:31.336399       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:30:31.390219       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:30:31.490669       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:30:31.490774       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1016 18:30:31.490882       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:30:31.513410       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:30:31.513464       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:30:31.519488       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:30:31.520014       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:30:31.520057       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:30:31.521583       1 config.go:200] "Starting service config controller"
	I1016 18:30:31.521607       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:30:31.521594       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:30:31.521636       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:30:31.521668       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:30:31.521673       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:30:31.521788       1 config.go:309] "Starting node config controller"
	I1016 18:30:31.521814       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:30:31.521822       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:30:31.622520       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 18:30:31.622572       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1016 18:30:31.622599       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e35972d82e9c22de02eeb267933f4b1af09651a36aa1249da16d297f40f25ec5] <==
	I1016 18:30:29.219303       1 serving.go:386] Generated self-signed cert in-memory
	I1016 18:30:30.386832       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 18:30:30.386858       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:30:30.395043       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1016 18:30:30.395082       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1016 18:30:30.395081       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:30:30.395088       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 18:30:30.395104       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:30:30.395106       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 18:30:30.395603       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 18:30:30.395968       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 18:30:30.496150       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 18:30:30.496178       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:30:30.496240       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 16 18:30:29 newest-cni-794682 kubelet[673]: E1016 18:30:29.984872     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-794682\" not found" node="newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.439763     673 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.439856     673 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.439890     673 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.440673     673 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.443650     673 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: E1016 18:30:30.455587     673 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-794682\" already exists" pod="kube-system/kube-controller-manager-newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.455627     673 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: E1016 18:30:30.464154     673 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-794682\" already exists" pod="kube-system/kube-scheduler-newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.464190     673 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: E1016 18:30:30.470591     673 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-794682\" already exists" pod="kube-system/etcd-newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.470625     673 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: E1016 18:30:30.477570     673 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-794682\" already exists" pod="kube-system/kube-apiserver-newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.939923     673 apiserver.go:52] "Watching apiserver"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.949958     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f697f30d-64fa-4695-ae47-0268f2604e30-xtables-lock\") pod \"kindnet-chqrm\" (UID: \"f697f30d-64fa-4695-ae47-0268f2604e30\") " pod="kube-system/kindnet-chqrm"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.950005     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f697f30d-64fa-4695-ae47-0268f2604e30-lib-modules\") pod \"kindnet-chqrm\" (UID: \"f697f30d-64fa-4695-ae47-0268f2604e30\") " pod="kube-system/kindnet-chqrm"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.950053     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f697f30d-64fa-4695-ae47-0268f2604e30-cni-cfg\") pod \"kindnet-chqrm\" (UID: \"f697f30d-64fa-4695-ae47-0268f2604e30\") " pod="kube-system/kindnet-chqrm"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.987098     673 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: E1016 18:30:30.993461     673 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-794682\" already exists" pod="kube-system/kube-scheduler-newest-cni-794682"
	Oct 16 18:30:31 newest-cni-794682 kubelet[673]: I1016 18:30:31.044530     673 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 16 18:30:31 newest-cni-794682 kubelet[673]: I1016 18:30:31.050493     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15fff10c-5233-4292-8a44-6005c5ad3ff1-xtables-lock\") pod \"kube-proxy-dvbrk\" (UID: \"15fff10c-5233-4292-8a44-6005c5ad3ff1\") " pod="kube-system/kube-proxy-dvbrk"
	Oct 16 18:30:31 newest-cni-794682 kubelet[673]: I1016 18:30:31.050533     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15fff10c-5233-4292-8a44-6005c5ad3ff1-lib-modules\") pod \"kube-proxy-dvbrk\" (UID: \"15fff10c-5233-4292-8a44-6005c5ad3ff1\") " pod="kube-system/kube-proxy-dvbrk"
	Oct 16 18:30:32 newest-cni-794682 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 18:30:32 newest-cni-794682 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 18:30:32 newest-cni-794682 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
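The kubelet section of that log ends with systemd stopping kubelet.service, which is consistent with the pause step exercised by this test. Whether the node agent is still down can be confirmed from the host with a one-liner (a minimal sketch using the same profile name; `systemctl is-active` prints the unit state and exits non-zero when the unit is inactive):

	out/minikube-linux-amd64 -p newest-cni-794682 ssh -- sudo systemctl is-active kubelet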
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-794682 -n newest-cni-794682
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-794682 -n newest-cni-794682: exit status 2 (325.248037ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
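The status checks in this post-mortem template single fields out of the status struct; several fields can be combined in one Go template when triaging by hand (a sketch reusing the `{{.APIServer}}` and `{{.Host}}` field names the harness already templates with; `{{.Kubelet}}` is assumed to be the matching kubelet field):

	out/minikube-linux-amd64 status -p newest-cni-794682 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'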
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-794682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-7k82h storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fmcgb kubernetes-dashboard-855c9754f9-pl2sx
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-794682 describe pod coredns-66bc5c9577-7k82h storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fmcgb kubernetes-dashboard-855c9754f9-pl2sx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-794682 describe pod coredns-66bc5c9577-7k82h storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fmcgb kubernetes-dashboard-855c9754f9-pl2sx: exit status 1 (61.96544ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-7k82h" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-fmcgb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-pl2sx" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-794682 describe pod coredns-66bc5c9577-7k82h storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fmcgb kubernetes-dashboard-855c9754f9-pl2sx: exit status 1
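The NotFound errors above are likely a namespace artifact: `get po -A` listed pods across all namespaces, while `describe pod` without `-n` only searches the default namespace (coredns and storage-provisioner live in kube-system, the dashboard pods in their own namespace). Printing the namespace alongside the field-selector result makes the follow-up describe unambiguous (a sketch against the same context):

	kubectl --context newest-cni-794682 get pods -A --field-selector=status.phase!=Running -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,PHASE:.status.phase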
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-794682
helpers_test.go:243: (dbg) docker inspect newest-cni-794682:

-- stdout --
	[
	    {
	        "Id": "c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173",
	        "Created": "2025-10-16T18:29:53.821117165Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 270966,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:30:21.430933296Z",
	            "FinishedAt": "2025-10-16T18:30:20.163725788Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173/hostname",
	        "HostsPath": "/var/lib/docker/containers/c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173/hosts",
	        "LogPath": "/var/lib/docker/containers/c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173/c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173-json.log",
	        "Name": "/newest-cni-794682",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-794682:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-794682",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c5fcc05061109e1f020ed25374a9b4500cafe0e1c909b7ba8085cfe342337173",
	                "LowerDir": "/var/lib/docker/overlay2/c7b8e24a1f9d7fba0e516e0f5cbd09bd62316d6698df3d8c1cda2d0d3d6d0153-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c7b8e24a1f9d7fba0e516e0f5cbd09bd62316d6698df3d8c1cda2d0d3d6d0153/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c7b8e24a1f9d7fba0e516e0f5cbd09bd62316d6698df3d8c1cda2d0d3d6d0153/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c7b8e24a1f9d7fba0e516e0f5cbd09bd62316d6698df3d8c1cda2d0d3d6d0153/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-794682",
	                "Source": "/var/lib/docker/volumes/newest-cni-794682/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-794682",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-794682",
	                "name.minikube.sigs.k8s.io": "newest-cni-794682",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "510d58cac0483c173902bce591bb392c1413dc5154b1ce279d2602c251fc7349",
	            "SandboxKey": "/var/run/docker/netns/510d58cac048",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-794682": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:f8:9b:c8:dd:a7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e00e8380442887174d300c66955f01f91b4ede1590a4ed3c23c8634e39c04bf",
	                    "EndpointID": "7648fcd26d0326e4afefa03e9e840318defd1d81f5c0304110282681506fb368",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-794682",
	                        "c5fcc0506110"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
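Individual fields of this inspect output can be pulled directly with a Go-template format string instead of dumping the full JSON (a minimal sketch; `.State.Status`, `.State.Paused`, and `.RestartCount` are the keys visible in the output above):

	docker inspect --format '{{.State.Status}} paused={{.State.Paused}} restarts={{.RestartCount}}' newest-cni-794682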
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-794682 -n newest-cni-794682
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-794682 -n newest-cni-794682: exit status 2 (312.554917ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-794682 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p old-k8s-version-956814 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │                     │
	│ delete  │ -p old-k8s-version-956814                                                                                                                                                                                                                     │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ delete  │ -p old-k8s-version-956814                                                                                                                                                                                                                     │ old-k8s-version-956814       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:28 UTC │
	│ start   │ -p embed-certs-063117 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p cert-expiration-489554 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-489554       │ jenkins │ v1.37.0 │ 16 Oct 25 18:28 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p cert-expiration-489554                                                                                                                                                                                                                     │ cert-expiration-489554       │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p disable-driver-mounts-246527                                                                                                                                                                                                               │ disable-driver-mounts-246527 │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p default-k8s-diff-port-523257 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:30 UTC │
	│ image   │ no-preload-808539 image list --format=json                                                                                                                                                                                                    │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ pause   │ -p no-preload-808539 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-063117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ stop    │ -p embed-certs-063117 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:30 UTC │
	│ delete  │ -p no-preload-808539                                                                                                                                                                                                                          │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p no-preload-808539                                                                                                                                                                                                                          │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p newest-cni-794682 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:30 UTC │
	│ addons  │ enable dashboard -p embed-certs-063117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ start   │ -p embed-certs-063117 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-794682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ stop    │ -p newest-cni-794682 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ addons  │ enable dashboard -p newest-cni-794682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ start   │ -p newest-cni-794682 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-523257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-523257 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ image   │ newest-cni-794682 image list --format=json                                                                                                                                                                                                    │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ pause   │ -p newest-cni-794682 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:30:21
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:30:21.152584  270736 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:30:21.152931  270736 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:30:21.152943  270736 out.go:374] Setting ErrFile to fd 2...
	I1016 18:30:21.152949  270736 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:30:21.153283  270736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:30:21.153985  270736 out.go:368] Setting JSON to false
	I1016 18:30:21.155547  270736 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4369,"bootTime":1760635052,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:30:21.155661  270736 start.go:141] virtualization: kvm guest
	I1016 18:30:21.159830  270736 out.go:179] * [newest-cni-794682] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:30:21.164009  270736 notify.go:220] Checking for updates...
	I1016 18:30:21.164046  270736 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:30:21.166051  270736 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:30:21.167545  270736 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:30:21.168937  270736 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 18:30:21.170373  270736 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:30:21.172157  270736 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:30:21.174450  270736 config.go:182] Loaded profile config "newest-cni-794682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:30:21.175151  270736 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:30:21.206447  270736 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 18:30:21.206560  270736 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:30:21.275675  270736 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-16 18:30:21.263342732 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:30:21.275800  270736 docker.go:318] overlay module found
	I1016 18:30:21.280927  270736 out.go:179] * Using the docker driver based on existing profile
	I1016 18:30:21.282522  270736 start.go:305] selected driver: docker
	I1016 18:30:21.282543  270736 start.go:925] validating driver "docker" against &{Name:newest-cni-794682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-794682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:30:21.282643  270736 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:30:21.283370  270736 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:30:21.345288  270736 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-16 18:30:21.334490708 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:30:21.345566  270736 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1016 18:30:21.345589  270736 cni.go:84] Creating CNI manager for ""
	I1016 18:30:21.345634  270736 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:30:21.345666  270736 start.go:349] cluster config:
	{Name:newest-cni-794682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-794682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:30:21.347912  270736 out.go:179] * Starting "newest-cni-794682" primary control-plane node in "newest-cni-794682" cluster
	I1016 18:30:21.349743  270736 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:30:21.351215  270736 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:30:21.352482  270736 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:30:21.352534  270736 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1016 18:30:21.352557  270736 cache.go:58] Caching tarball of preloaded images
	I1016 18:30:21.352605  270736 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:30:21.352638  270736 preload.go:233] Found /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1016 18:30:21.352646  270736 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:30:21.352759  270736 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682/config.json ...
	I1016 18:30:21.377574  270736 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:30:21.377594  270736 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:30:21.377609  270736 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:30:21.377632  270736 start.go:360] acquireMachinesLock for newest-cni-794682: {Name:mkc6c572380046cef9b391cb88c87708b2d5d19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:30:21.377760  270736 start.go:364] duration metric: took 81.158µs to acquireMachinesLock for "newest-cni-794682"
	I1016 18:30:21.377786  270736 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:30:21.377793  270736 fix.go:54] fixHost starting: 
	I1016 18:30:21.378064  270736 cli_runner.go:164] Run: docker container inspect newest-cni-794682 --format={{.State.Status}}
	I1016 18:30:21.399419  270736 fix.go:112] recreateIfNeeded on newest-cni-794682: state=Stopped err=<nil>
	W1016 18:30:21.399455  270736 fix.go:138] unexpected machine state, will restart: <nil>
	W1016 18:30:19.263386  265507 pod_ready.go:104] pod "coredns-66bc5c9577-v85b5" is not "Ready", error: <nil>
	W1016 18:30:21.264000  265507 pod_ready.go:104] pod "coredns-66bc5c9577-v85b5" is not "Ready", error: <nil>
	I1016 18:30:19.444319  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:30:19.444740  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:30:19.444797  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:30:19.444857  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:30:19.484474  228782 cri.go:89] found id: "0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:19.484498  228782 cri.go:89] found id: ""
	I1016 18:30:19.484508  228782 logs.go:282] 1 containers: [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98]
	I1016 18:30:19.484567  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:19.489338  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:30:19.489416  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:30:19.526406  228782 cri.go:89] found id: ""
	I1016 18:30:19.526494  228782 logs.go:282] 0 containers: []
	W1016 18:30:19.526510  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:30:19.526518  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:30:19.526576  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:30:19.565282  228782 cri.go:89] found id: ""
	I1016 18:30:19.565310  228782 logs.go:282] 0 containers: []
	W1016 18:30:19.565321  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:30:19.565329  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:30:19.565389  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:30:19.602445  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:19.602470  228782 cri.go:89] found id: ""
	I1016 18:30:19.602479  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:30:19.602535  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:19.607731  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:30:19.607800  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:30:19.642926  228782 cri.go:89] found id: ""
	I1016 18:30:19.642964  228782 logs.go:282] 0 containers: []
	W1016 18:30:19.642975  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:30:19.642982  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:30:19.643045  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:30:19.677132  228782 cri.go:89] found id: "5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:19.677156  228782 cri.go:89] found id: ""
	I1016 18:30:19.677165  228782 logs.go:282] 1 containers: [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a]
	I1016 18:30:19.677224  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:19.682584  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:30:19.682648  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:30:19.712396  228782 cri.go:89] found id: ""
	I1016 18:30:19.712423  228782 logs.go:282] 0 containers: []
	W1016 18:30:19.712434  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:30:19.712442  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:30:19.712492  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:30:19.741255  228782 cri.go:89] found id: ""
	I1016 18:30:19.741281  228782 logs.go:282] 0 containers: []
	W1016 18:30:19.741292  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:30:19.741302  228782 logs.go:123] Gathering logs for kube-controller-manager [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a] ...
	I1016 18:30:19.741317  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:19.776253  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:30:19.776280  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:30:19.837597  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:30:19.837630  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:30:19.871309  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:30:19.871335  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:30:19.963099  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:30:19.963135  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:30:19.982694  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:30:19.982755  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:30:20.068957  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:30:20.068989  228782 logs.go:123] Gathering logs for kube-apiserver [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98] ...
	I1016 18:30:20.069004  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:20.117950  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:30:20.117985  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:22.697499  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:30:22.697921  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:30:22.697978  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:30:22.698033  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:30:22.727992  228782 cri.go:89] found id: "0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:22.728011  228782 cri.go:89] found id: ""
	I1016 18:30:22.728019  228782 logs.go:282] 1 containers: [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98]
	I1016 18:30:22.728087  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:22.732224  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:30:22.732294  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:30:22.761154  228782 cri.go:89] found id: ""
	I1016 18:30:22.761181  228782 logs.go:282] 0 containers: []
	W1016 18:30:22.761191  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:30:22.761199  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:30:22.761277  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:30:22.790807  228782 cri.go:89] found id: ""
	I1016 18:30:22.790834  228782 logs.go:282] 0 containers: []
	W1016 18:30:22.790844  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:30:22.790852  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:30:22.790910  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:30:22.818463  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:22.818482  228782 cri.go:89] found id: ""
	I1016 18:30:22.818489  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:30:22.818543  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:22.822771  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:30:22.822842  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:30:22.850980  228782 cri.go:89] found id: ""
	I1016 18:30:22.851008  228782 logs.go:282] 0 containers: []
	W1016 18:30:22.851016  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:30:22.851025  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:30:22.851081  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:30:22.879807  228782 cri.go:89] found id: "5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:22.879829  228782 cri.go:89] found id: ""
	I1016 18:30:22.879837  228782 logs.go:282] 1 containers: [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a]
	I1016 18:30:22.879891  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:22.884035  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:30:22.884101  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:30:22.910938  228782 cri.go:89] found id: ""
	I1016 18:30:22.910962  228782 logs.go:282] 0 containers: []
	W1016 18:30:22.910971  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:30:22.910978  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:30:22.911037  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:30:22.938572  228782 cri.go:89] found id: ""
	I1016 18:30:22.938600  228782 logs.go:282] 0 containers: []
	W1016 18:30:22.938610  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:30:22.938621  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:30:22.938637  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:30:23.029415  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:30:23.029447  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:30:23.044905  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:30:23.044932  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:30:23.102676  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:30:23.102698  228782 logs.go:123] Gathering logs for kube-apiserver [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98] ...
	I1016 18:30:23.102710  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:23.135474  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:30:23.135510  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:23.188617  228782 logs.go:123] Gathering logs for kube-controller-manager [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a] ...
	I1016 18:30:23.188651  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:23.217050  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:30:23.217083  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:30:23.274650  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:30:23.274680  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:30:21.400992  270736 out.go:252] * Restarting existing docker container for "newest-cni-794682" ...
	I1016 18:30:21.401102  270736 cli_runner.go:164] Run: docker start newest-cni-794682
	I1016 18:30:21.673444  270736 cli_runner.go:164] Run: docker container inspect newest-cni-794682 --format={{.State.Status}}
	I1016 18:30:21.696413  270736 kic.go:430] container "newest-cni-794682" state is running.
	I1016 18:30:21.696902  270736 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-794682
	I1016 18:30:21.721293  270736 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682/config.json ...
	I1016 18:30:21.721489  270736 machine.go:93] provisionDockerMachine start ...
	I1016 18:30:21.721565  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:21.740696  270736 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:21.740972  270736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1016 18:30:21.740992  270736 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:30:21.741729  270736 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47254->127.0.0.1:33093: read: connection reset by peer
	I1016 18:30:24.883031  270736 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-794682
	
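Note the sequence above: the first dial right after "docker start" is reset, most likely because sshd inside the restarted container is not yet accepting connections, and the provisioner simply retries until the handshake succeeds about three seconds later. The wait can be reproduced by hand with a loop like this (illustrative only; 33093 is the host port Docker mapped to the container's 22/tcp and the key path comes from the sshutil lines below):

    # poll until the restarted container's sshd accepts the handshake
    until ssh -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
        -i "$MINIKUBE_HOME/machines/newest-cni-794682/id_rsa" \
        -p 33093 docker@127.0.0.1 true 2>/dev/null; do
      sleep 1
    done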
	I1016 18:30:24.883059  270736 ubuntu.go:182] provisioning hostname "newest-cni-794682"
	I1016 18:30:24.883118  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:24.901950  270736 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:24.902179  270736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1016 18:30:24.902194  270736 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-794682 && echo "newest-cni-794682" | sudo tee /etc/hostname
	I1016 18:30:25.052844  270736 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-794682
	
	I1016 18:30:25.052908  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:25.074063  270736 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:25.074361  270736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1016 18:30:25.074392  270736 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-794682' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-794682/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-794682' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:30:25.213044  270736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
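The shell block above is idempotent: it touches /etc/hosts only when the hostname is absent, rewriting an existing 127.0.1.1 entry in place or appending one otherwise, so re-provisioning the same machine is a no-op (hence the empty command output here). A quick after-the-fact check from the host (hypothetical):

    docker exec newest-cni-794682 grep -w newest-cni-794682 /etc/hosts
    # expected: 127.0.1.1 newest-cni-794682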
	I1016 18:30:25.213082  270736 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8849/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8849/.minikube}
	I1016 18:30:25.213108  270736 ubuntu.go:190] setting up certificates
	I1016 18:30:25.213121  270736 provision.go:84] configureAuth start
	I1016 18:30:25.213178  270736 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-794682
	I1016 18:30:25.233313  270736 provision.go:143] copyHostCerts
	I1016 18:30:25.233382  270736 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem, removing ...
	I1016 18:30:25.233398  270736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem
	I1016 18:30:25.233499  270736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem (1078 bytes)
	I1016 18:30:25.233659  270736 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem, removing ...
	I1016 18:30:25.233673  270736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem
	I1016 18:30:25.233738  270736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem (1123 bytes)
	I1016 18:30:25.233826  270736 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem, removing ...
	I1016 18:30:25.233838  270736 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem
	I1016 18:30:25.233878  270736 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem (1679 bytes)
	I1016 18:30:25.233953  270736 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem org=jenkins.newest-cni-794682 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-794682]
	I1016 18:30:25.522191  270736 provision.go:177] copyRemoteCerts
	I1016 18:30:25.522257  270736 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:30:25.522295  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:25.540519  270736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/newest-cni-794682/id_rsa Username:docker}
	I1016 18:30:25.640499  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:30:25.659964  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 18:30:25.679636  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1016 18:30:25.699356  270736 provision.go:87] duration metric: took 486.222013ms to configureAuth
	I1016 18:30:25.699382  270736 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:30:25.699870  270736 config.go:182] Loaded profile config "newest-cni-794682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:30:25.699984  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:25.720440  270736 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:25.720675  270736 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1016 18:30:25.720701  270736 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:30:26.017237  270736 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
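The printf-pipe-tee above writes an environment file rather than editing the unit: on the kicbase image the crio service is wired (by minikube's image, not stock CRI-O) to source /etc/sysconfig/crio.minikube, so CRIO_MINIKUBE_OPTIONS reaches the daemon's command line after the restart. To confirm what landed:

    docker exec newest-cni-794682 cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '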
	I1016 18:30:26.017263  270736 machine.go:96] duration metric: took 4.29574996s to provisionDockerMachine
	I1016 18:30:26.017277  270736 start.go:293] postStartSetup for "newest-cni-794682" (driver="docker")
	I1016 18:30:26.017292  270736 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:30:26.017364  270736 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:30:26.017414  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:26.042542  270736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/newest-cni-794682/id_rsa Username:docker}
	I1016 18:30:26.149164  270736 ssh_runner.go:195] Run: cat /etc/os-release
	W1016 18:30:23.763146  265507 pod_ready.go:104] pod "coredns-66bc5c9577-v85b5" is not "Ready", error: <nil>
	W1016 18:30:25.763423  265507 pod_ready.go:104] pod "coredns-66bc5c9577-v85b5" is not "Ready", error: <nil>
	I1016 18:30:26.153748  270736 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:30:26.153786  270736 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:30:26.153799  270736 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/addons for local assets ...
	I1016 18:30:26.153869  270736 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/files for local assets ...
	I1016 18:30:26.153979  270736 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem -> 123752.pem in /etc/ssl/certs
	I1016 18:30:26.154102  270736 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:30:26.163759  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:30:26.186409  270736 start.go:296] duration metric: took 169.116529ms for postStartSetup
	I1016 18:30:26.186494  270736 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:30:26.186541  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:26.207689  270736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/newest-cni-794682/id_rsa Username:docker}
	I1016 18:30:26.307661  270736 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:30:26.312689  270736 fix.go:56] duration metric: took 4.934886255s for fixHost
	I1016 18:30:26.312753  270736 start.go:83] releasing machines lock for "newest-cni-794682", held for 4.93497707s
	I1016 18:30:26.312821  270736 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-794682
	I1016 18:30:26.332943  270736 ssh_runner.go:195] Run: cat /version.json
	I1016 18:30:26.332992  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:26.333061  270736 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:30:26.333152  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:26.354193  270736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/newest-cni-794682/id_rsa Username:docker}
	I1016 18:30:26.356108  270736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/newest-cni-794682/id_rsa Username:docker}
	I1016 18:30:26.519378  270736 ssh_runner.go:195] Run: systemctl --version
	I1016 18:30:26.527614  270736 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:30:26.568056  270736 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:30:26.573566  270736 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:30:26.573632  270736 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:30:26.583220  270736 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:30:26.583241  270736 start.go:495] detecting cgroup driver to use...
	I1016 18:30:26.583271  270736 detect.go:190] detected "systemd" cgroup driver on host os
	I1016 18:30:26.583310  270736 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:30:26.599098  270736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:30:26.613675  270736 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:30:26.613747  270736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:30:26.633808  270736 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:30:26.649365  270736 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:30:26.743304  270736 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:30:26.846320  270736 docker.go:234] disabling docker service ...
	I1016 18:30:26.846382  270736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:30:26.864234  270736 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:30:26.879821  270736 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:30:26.968850  270736 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:30:27.063530  270736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:30:27.079189  270736 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:30:27.094469  270736 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:30:27.094530  270736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:27.104027  270736 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1016 18:30:27.104085  270736 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:27.114598  270736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:27.125585  270736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:27.135531  270736 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:30:27.144618  270736 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:27.155075  270736 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:27.165398  270736 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:27.176126  270736 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:30:27.185564  270736 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:30:27.194613  270736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:30:27.290696  270736 ssh_runner.go:195] Run: sudo systemctl restart crio
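The run of sed commands above edits CRI-O's existing drop-in in place instead of templating a fresh file. Reconstructing from the commands (not captured file contents), /etc/crio/crio.conf.d/02-crio.conf should end up containing roughly:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

which the daemon-reload and crio restart above then pick up.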
	I1016 18:30:27.424850  270736 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:30:27.424905  270736 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:30:27.429202  270736 start.go:563] Will wait 60s for crictl version
	I1016 18:30:27.429261  270736 ssh_runner.go:195] Run: which crictl
	I1016 18:30:27.433167  270736 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:30:27.460841  270736 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:30:27.460932  270736 ssh_runner.go:195] Run: crio --version
	I1016 18:30:27.498214  270736 ssh_runner.go:195] Run: crio --version
	I1016 18:30:27.536523  270736 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:30:27.537910  270736 cli_runner.go:164] Run: docker network inspect newest-cni-794682 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:30:27.557578  270736 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1016 18:30:27.562079  270736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
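The rewrite pattern in that command (grep -v the old entry, append the new one, then sudo cp over /etc/hosts) looks roundabout but is deliberate: inside a Docker container /etc/hosts is a bind mount, so the file must be rewritten in place; atomically renaming a temp file onto it would fail because the path is a mount point. The same idiom, spelled out:

    # rebuild /etc/hosts minus the stale entry, then copy (not move) it back
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.94.1\thost.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$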
	I1016 18:30:27.576035  270736 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1016 18:30:27.577664  270736 kubeadm.go:883] updating cluster {Name:newest-cni-794682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-794682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:30:27.577866  270736 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:30:27.578007  270736 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:30:27.611936  270736 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:30:27.611954  270736 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:30:27.611998  270736 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:30:27.639039  270736 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:30:27.639058  270736 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:30:27.639066  270736 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1016 18:30:27.639162  270736 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-794682 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-794682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
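The empty ExecStart= line in the kubelet drop-in above is required, not a glitch: for a non-oneshot service systemd rejects a second ExecStart unless the list is cleared first, so an override that replaces the command must reset it before assigning. Once the drop-in is scp'd and daemon-reload runs below, the merged unit can be inspected with:

    systemctl cat kubelet   # shows the base unit plus 10-kubeadm.conf inline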
	I1016 18:30:27.639226  270736 ssh_runner.go:195] Run: crio config
	I1016 18:30:27.688192  270736 cni.go:84] Creating CNI manager for ""
	I1016 18:30:27.688217  270736 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:30:27.688234  270736 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1016 18:30:27.688254  270736 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-794682 NodeName:newest-cni-794682 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:30:27.688381  270736 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-794682"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
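The generated file is four YAML documents in one — InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration — separated by "---". Before it is handed to kubeadm it can be checked standalone (assuming the kubeadm binary minikube staged alongside kubectl under /var/lib/minikube/binaries):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new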
	I1016 18:30:27.688439  270736 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:30:27.697175  270736 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:30:27.697242  270736 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:30:27.705450  270736 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1016 18:30:27.720001  270736 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:30:27.733348  270736 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1016 18:30:27.746820  270736 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1016 18:30:27.750957  270736 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:30:27.762308  270736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:30:27.846237  270736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:30:27.872347  270736 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682 for IP: 192.168.94.2
	I1016 18:30:27.872368  270736 certs.go:195] generating shared ca certs ...
	I1016 18:30:27.872386  270736 certs.go:227] acquiring lock for ca certs: {Name:mkebf15a3970a66f77cf66e14f6efeaafcab5e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:27.872522  270736 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key
	I1016 18:30:27.872562  270736 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key
	I1016 18:30:27.872574  270736 certs.go:257] generating profile certs ...
	I1016 18:30:27.872668  270736 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682/client.key
	I1016 18:30:27.872750  270736 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682/apiserver.key.fc2f255e
	I1016 18:30:27.872800  270736 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682/proxy-client.key
	I1016 18:30:27.872901  270736 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem (1338 bytes)
	W1016 18:30:27.872928  270736 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375_empty.pem, impossibly tiny 0 bytes
	I1016 18:30:27.872937  270736 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem (1679 bytes)
	I1016 18:30:27.872958  270736 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem (1078 bytes)
	I1016 18:30:27.872980  270736 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:30:27.872999  270736 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem (1679 bytes)
	I1016 18:30:27.873036  270736 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:30:27.873698  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:30:27.893567  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1016 18:30:27.914471  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:30:27.934468  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1016 18:30:27.958051  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1016 18:30:27.978137  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1016 18:30:27.996431  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:30:28.014695  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/newest-cni-794682/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:30:28.032308  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:30:28.050340  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem --> /usr/share/ca-certificates/12375.pem (1338 bytes)
	I1016 18:30:28.069115  270736 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /usr/share/ca-certificates/123752.pem (1708 bytes)
	I1016 18:30:28.088174  270736 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:30:28.100964  270736 ssh_runner.go:195] Run: openssl version
	I1016 18:30:28.107054  270736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:30:28.115573  270736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:30:28.119475  270736 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:30:28.119531  270736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:30:28.155305  270736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:30:28.163879  270736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12375.pem && ln -fs /usr/share/ca-certificates/12375.pem /etc/ssl/certs/12375.pem"
	I1016 18:30:28.172335  270736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12375.pem
	I1016 18:30:28.176254  270736 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:49 /usr/share/ca-certificates/12375.pem
	I1016 18:30:28.176317  270736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12375.pem
	I1016 18:30:28.211644  270736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12375.pem /etc/ssl/certs/51391683.0"
	I1016 18:30:28.220360  270736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123752.pem && ln -fs /usr/share/ca-certificates/123752.pem /etc/ssl/certs/123752.pem"
	I1016 18:30:28.229458  270736 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123752.pem
	I1016 18:30:28.233520  270736 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:49 /usr/share/ca-certificates/123752.pem
	I1016 18:30:28.233583  270736 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123752.pem
	I1016 18:30:28.270999  270736 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123752.pem /etc/ssl/certs/3ec20f2e.0"
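The <hash>.0 names created by these ln -fs calls are not arbitrary: openssl x509 -hash prints the certificate's subject-name hash, and OpenSSL looks CAs up in /etc/ssl/certs by exactly that hashed filename. The correspondence is visible in the log itself:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/123752.pem
    # 3ec20f2e  -> hence the symlink /etc/ssl/certs/3ec20f2e.0 created above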
	I1016 18:30:28.279676  270736 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:30:28.283817  270736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:30:28.318660  270736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:30:28.353995  270736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:30:28.393326  270736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:30:28.436750  270736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:30:28.480217  270736 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
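Each -checkend 86400 run asks whether the certificate expires within the next 24 hours; openssl exits non-zero if it does, and that exit status tells minikube whether the cert is still good for at least a day. In isolation:

    if ! sudo openssl x509 -noout -checkend 86400 \
        -in /var/lib/minikube/certs/apiserver.crt; then
      echo "apiserver cert expires within 24h; would need regeneration"
    fi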
	I1016 18:30:28.534156  270736 kubeadm.go:400] StartCluster: {Name:newest-cni-794682 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-794682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:30:28.534328  270736 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:30:28.534392  270736 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:30:28.579409  270736 cri.go:89] found id: "ab9094c30ff22e9bfab5eec94732ce5878232de56d4a25020e9e9ad3911f02bf"
	I1016 18:30:28.579431  270736 cri.go:89] found id: "91c42f392b9070df7654a66f4ce71b4f085c9f171014a9ee55a5f0bb8c327f14"
	I1016 18:30:28.579437  270736 cri.go:89] found id: "e35972d82e9c22de02eeb267933f4b1af09651a36aa1249da16d297f40f25ec5"
	I1016 18:30:28.579441  270736 cri.go:89] found id: "91494023e5e1bf1f6307bcee4e2d533dfa2cd3d963e37741b3f5ab473d748861"
	I1016 18:30:28.579445  270736 cri.go:89] found id: ""
	I1016 18:30:28.579484  270736 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 18:30:28.596443  270736 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:30:28Z" level=error msg="open /run/runc: no such file or directory"
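This failure is expected right after a container restart: on this image /run is a tmpfs, so the runc state directory /run/runc reappears only once the runtime creates a container on the current boot, while the exited containers crictl just listed come from CRI-O's on-disk state. minikube logs the warning and proceeds as if nothing is paused. Reproducing the probe by hand on a freshly started node:

    sudo runc list -f json
    # time=... level=error msg="open /run/runc: no such file or directory"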
	I1016 18:30:28.596510  270736 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:30:28.607878  270736 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 18:30:28.607899  270736 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 18:30:28.607946  270736 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 18:30:28.619147  270736 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:30:28.620476  270736 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-794682" does not appear in /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:30:28.621440  270736 kubeconfig.go:62] /home/jenkins/minikube-integration/21738-8849/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-794682" cluster setting kubeconfig missing "newest-cni-794682" context setting]
	I1016 18:30:28.622735  270736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:28.624688  270736 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 18:30:28.635776  270736 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1016 18:30:28.635813  270736 kubeadm.go:601] duration metric: took 27.9081ms to restartPrimaryControlPlane
	I1016 18:30:28.635826  270736 kubeadm.go:402] duration metric: took 101.678552ms to StartCluster
	I1016 18:30:28.635846  270736 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:28.635915  270736 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:30:28.638086  270736 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:28.638402  270736 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:30:28.638633  270736 config.go:182] Loaded profile config "newest-cni-794682": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:30:28.638678  270736 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:30:28.638783  270736 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-794682"
	I1016 18:30:28.638806  270736 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-794682"
	W1016 18:30:28.638815  270736 addons.go:247] addon storage-provisioner should already be in state true
	I1016 18:30:28.638846  270736 host.go:66] Checking if "newest-cni-794682" exists ...
	I1016 18:30:28.638898  270736 addons.go:69] Setting dashboard=true in profile "newest-cni-794682"
	I1016 18:30:28.638925  270736 addons.go:69] Setting default-storageclass=true in profile "newest-cni-794682"
	I1016 18:30:28.638947  270736 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-794682"
	I1016 18:30:28.638968  270736 addons.go:238] Setting addon dashboard=true in "newest-cni-794682"
	W1016 18:30:28.638979  270736 addons.go:247] addon dashboard should already be in state true
	I1016 18:30:28.639007  270736 host.go:66] Checking if "newest-cni-794682" exists ...
	I1016 18:30:28.639342  270736 cli_runner.go:164] Run: docker container inspect newest-cni-794682 --format={{.State.Status}}
	I1016 18:30:28.639432  270736 cli_runner.go:164] Run: docker container inspect newest-cni-794682 --format={{.State.Status}}
	I1016 18:30:28.639534  270736 cli_runner.go:164] Run: docker container inspect newest-cni-794682 --format={{.State.Status}}
	I1016 18:30:28.641605  270736 out.go:179] * Verifying Kubernetes components...
	I1016 18:30:28.642980  270736 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:30:28.670617  270736 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:30:28.670684  270736 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1016 18:30:28.670985  270736 addons.go:238] Setting addon default-storageclass=true in "newest-cni-794682"
	W1016 18:30:28.671005  270736 addons.go:247] addon default-storageclass should already be in state true
	I1016 18:30:28.671032  270736 host.go:66] Checking if "newest-cni-794682" exists ...
	I1016 18:30:28.672096  270736 cli_runner.go:164] Run: docker container inspect newest-cni-794682 --format={{.State.Status}}
	I1016 18:30:28.672761  270736 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:30:28.672823  270736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:30:28.672896  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:28.674592  270736 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1016 18:30:25.806872  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:30:25.807310  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:30:25.807362  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:30:25.807421  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:30:25.839834  228782 cri.go:89] found id: "0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:25.839861  228782 cri.go:89] found id: ""
	I1016 18:30:25.839871  228782 logs.go:282] 1 containers: [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98]
	I1016 18:30:25.839931  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:25.844258  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:30:25.844335  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:30:25.874569  228782 cri.go:89] found id: ""
	I1016 18:30:25.874597  228782 logs.go:282] 0 containers: []
	W1016 18:30:25.874608  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:30:25.874615  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:30:25.874672  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:30:25.904988  228782 cri.go:89] found id: ""
	I1016 18:30:25.905010  228782 logs.go:282] 0 containers: []
	W1016 18:30:25.905017  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:30:25.905023  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:30:25.905084  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:30:25.937008  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:25.937042  228782 cri.go:89] found id: ""
	I1016 18:30:25.937053  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:30:25.937109  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:25.941845  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:30:25.941911  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:30:25.971978  228782 cri.go:89] found id: ""
	I1016 18:30:25.972010  228782 logs.go:282] 0 containers: []
	W1016 18:30:25.972025  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:30:25.972032  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:30:25.972091  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:30:26.003366  228782 cri.go:89] found id: "5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:26.003389  228782 cri.go:89] found id: ""
	I1016 18:30:26.003399  228782 logs.go:282] 1 containers: [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a]
	I1016 18:30:26.003452  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:26.009005  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:30:26.009142  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:30:26.042590  228782 cri.go:89] found id: ""
	I1016 18:30:26.042608  228782 logs.go:282] 0 containers: []
	W1016 18:30:26.042623  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:30:26.042630  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:30:26.042677  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:30:26.072694  228782 cri.go:89] found id: ""
	I1016 18:30:26.072731  228782 logs.go:282] 0 containers: []
	W1016 18:30:26.072741  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:30:26.072750  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:30:26.072763  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:26.133910  228782 logs.go:123] Gathering logs for kube-controller-manager [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a] ...
	I1016 18:30:26.133942  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:26.165627  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:30:26.165651  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:30:26.235559  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:30:26.235590  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:30:26.269670  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:30:26.269699  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:30:26.374015  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:30:26.374044  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:30:26.390353  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:30:26.390391  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:30:26.450507  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:30:26.450526  228782 logs.go:123] Gathering logs for kube-apiserver [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98] ...
	I1016 18:30:26.450540  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:28.988780  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:30:28.989610  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:30:28.989690  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:30:28.989768  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:30:29.026445  228782 cri.go:89] found id: "0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:29.026472  228782 cri.go:89] found id: ""
	I1016 18:30:29.026482  228782 logs.go:282] 1 containers: [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98]
	I1016 18:30:29.026540  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:29.031759  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:30:29.031846  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:30:28.676026  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1016 18:30:28.676043  270736 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1016 18:30:28.676135  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:28.712580  270736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/newest-cni-794682/id_rsa Username:docker}
	I1016 18:30:28.713119  270736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/newest-cni-794682/id_rsa Username:docker}
	I1016 18:30:28.714947  270736 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:30:28.714967  270736 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:30:28.715029  270736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-794682
	I1016 18:30:28.741836  270736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/newest-cni-794682/id_rsa Username:docker}
	I1016 18:30:28.800526  270736 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:30:28.822089  270736 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:30:28.822153  270736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:30:28.837899  270736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:30:28.839004  270736 api_server.go:72] duration metric: took 200.562784ms to wait for apiserver process to appear ...
	I1016 18:30:28.839051  270736 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:30:28.839071  270736 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:30:28.839562  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1016 18:30:28.839584  270736 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1016 18:30:28.865259  270736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:30:28.868423  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1016 18:30:28.868444  270736 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1016 18:30:28.891481  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1016 18:30:28.891605  270736 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1016 18:30:28.911475  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1016 18:30:28.911501  270736 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1016 18:30:28.933606  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1016 18:30:28.933634  270736 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1016 18:30:28.951512  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1016 18:30:28.951545  270736 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1016 18:30:28.969113  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1016 18:30:28.969137  270736 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1016 18:30:28.987817  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1016 18:30:28.987851  270736 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1016 18:30:29.003978  270736 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1016 18:30:29.003997  270736 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1016 18:30:29.021823  270736 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1016 18:30:30.328142  270736 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1016 18:30:30.328170  270736 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1016 18:30:30.328184  270736 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:30:30.352908  270736 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1016 18:30:30.352943  270736 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1016 18:30:30.353040  270736 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:30:30.362670  270736 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1016 18:30:30.362701  270736 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1016 18:30:30.839985  270736 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:30:30.845004  270736 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:30:30.845033  270736 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
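Note: when any check fails, /healthz returns 500 together with the per-check breakdown shown above; here only the rbac/bootstrap-roles and scheduling post-start hooks are still pending. To the best of my knowledge of the apiserver's healthz handler, each check is also exposed as its own subpath, so a single failing hook can be polled in isolation (a sketch):

    curl -k 'https://192.168.94.2:8443/healthz?verbose'                            # per-check breakdown even on success
    curl -k https://192.168.94.2:8443/healthz/poststarthook/rbac/bootstrap-roles   # just the failing hook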
	I1016 18:30:30.882306  270736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.044369462s)
	I1016 18:30:30.882347  270736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.017051365s)
	I1016 18:30:30.882474  270736 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.860596066s)
	I1016 18:30:30.884225  270736 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-794682 addons enable metrics-server
	
	I1016 18:30:30.893472  270736 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1016 18:30:30.894967  270736 addons.go:514] duration metric: took 2.256273765s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
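Note: the addon manager reports storage-provisioner, dashboard, and default-storageclass enabled in about 2.3s. This can be cross-checked from the host (a sketch using the profile name from the log):

    minikube -p newest-cni-794682 addons list      # shows enabled/disabled state per addon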
	W1016 18:30:28.263632  265507 pod_ready.go:104] pod "coredns-66bc5c9577-v85b5" is not "Ready", error: <nil>
	W1016 18:30:30.265847  265507 pod_ready.go:104] pod "coredns-66bc5c9577-v85b5" is not "Ready", error: <nil>
	I1016 18:30:31.339909  270736 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:30:31.344657  270736 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:30:31.344688  270736 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:30:31.839332  270736 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:30:31.845868  270736 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1016 18:30:31.846793  270736 api_server.go:141] control plane version: v1.34.1
	I1016 18:30:31.846818  270736 api_server.go:131] duration metric: took 3.007759792s to wait for apiserver health ...
	I1016 18:30:31.846826  270736 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:30:31.850233  270736 system_pods.go:59] 8 kube-system pods found
	I1016 18:30:31.850260  270736 system_pods.go:61] "coredns-66bc5c9577-7k82h" [127d26c2-1922-4ad8-b6cb-a86f9aefc431] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1016 18:30:31.850268  270736 system_pods.go:61] "etcd-newest-cni-794682" [3b93c2af-67b5-49b1-a0d8-0222ed51a01f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 18:30:31.850282  270736 system_pods.go:61] "kindnet-chqrm" [f697f30d-64fa-4695-ae47-0268f2604e30] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1016 18:30:31.850291  270736 system_pods.go:61] "kube-apiserver-newest-cni-794682" [e42f2077-4b39-4426-9f1b-67c3faec9f6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:30:31.850302  270736 system_pods.go:61] "kube-controller-manager-newest-cni-794682" [29288a90-424a-435b-9fe3-1c4e512c032e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:30:31.850312  270736 system_pods.go:61] "kube-proxy-dvbrk" [15fff10c-5233-4292-8a44-6005c5ad3ff1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1016 18:30:31.850316  270736 system_pods.go:61] "kube-scheduler-newest-cni-794682" [4a6ae32c-791f-4592-bf85-5c9d9fba8c17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:30:31.850320  270736 system_pods.go:61] "storage-provisioner" [5d551025-22ed-4596-b776-7f087cb2cd62] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1016 18:30:31.850326  270736 system_pods.go:74] duration metric: took 3.49563ms to wait for pod list to return data ...
	I1016 18:30:31.850336  270736 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:30:31.852686  270736 default_sa.go:45] found service account: "default"
	I1016 18:30:31.852704  270736 default_sa.go:55] duration metric: took 2.363021ms for default service account to be created ...
	I1016 18:30:31.852733  270736 kubeadm.go:586] duration metric: took 3.214277249s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1016 18:30:31.852747  270736 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:30:31.854814  270736 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1016 18:30:31.854835  270736 node_conditions.go:123] node cpu capacity is 8
	I1016 18:30:31.854847  270736 node_conditions.go:105] duration metric: took 2.096841ms to run NodePressure ...
	I1016 18:30:31.854858  270736 start.go:241] waiting for startup goroutines ...
	I1016 18:30:31.854867  270736 start.go:246] waiting for cluster config update ...
	I1016 18:30:31.854877  270736 start.go:255] writing updated cluster config ...
	I1016 18:30:31.855149  270736 ssh_runner.go:195] Run: rm -f paused
	I1016 18:30:31.903897  270736 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1016 18:30:31.906661  270736 out.go:179] * Done! kubectl is now configured to use "newest-cni-794682" cluster and "default" namespace by default
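Note: at this point the kubeconfig context has been switched to the new profile. Two quick sanity checks from the host (a sketch):

    kubectl config current-context      # expect: newest-cni-794682
    kubectl get pods -A                 # confirms the client really talks to the new cluster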
	I1016 18:30:29.066541  228782 cri.go:89] found id: ""
	I1016 18:30:29.066568  228782 logs.go:282] 0 containers: []
	W1016 18:30:29.066579  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:30:29.066586  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:30:29.066639  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:30:29.101207  228782 cri.go:89] found id: ""
	I1016 18:30:29.101257  228782 logs.go:282] 0 containers: []
	W1016 18:30:29.101267  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:30:29.101279  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:30:29.101338  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:30:29.133959  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:29.133985  228782 cri.go:89] found id: ""
	I1016 18:30:29.133995  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:30:29.134052  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:29.138831  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:30:29.138902  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:30:29.167271  228782 cri.go:89] found id: ""
	I1016 18:30:29.167301  228782 logs.go:282] 0 containers: []
	W1016 18:30:29.167311  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:30:29.167318  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:30:29.167381  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:30:29.196794  228782 cri.go:89] found id: "5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:29.196816  228782 cri.go:89] found id: ""
	I1016 18:30:29.196826  228782 logs.go:282] 1 containers: [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a]
	I1016 18:30:29.196884  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:29.201020  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:30:29.201089  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:30:29.233089  228782 cri.go:89] found id: ""
	I1016 18:30:29.233121  228782 logs.go:282] 0 containers: []
	W1016 18:30:29.233131  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:30:29.233141  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:30:29.233205  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:30:29.266067  228782 cri.go:89] found id: ""
	I1016 18:30:29.266095  228782 logs.go:282] 0 containers: []
	W1016 18:30:29.266105  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:30:29.266114  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:30:29.266127  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:30:29.336952  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:30:29.336988  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:30:29.371919  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:30:29.371954  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:30:29.491901  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:30:29.491945  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:30:29.511278  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:30:29.511312  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:30:29.579955  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:30:29.579985  228782 logs.go:123] Gathering logs for kube-apiserver [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98] ...
	I1016 18:30:29.580003  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:29.621236  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:30:29.621267  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:29.691968  228782 logs.go:123] Gathering logs for kube-controller-manager [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a] ...
	I1016 18:30:29.692014  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:32.227896  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:30:32.228293  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:30:32.228345  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:30:32.228394  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:30:32.258694  228782 cri.go:89] found id: "0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:32.258729  228782 cri.go:89] found id: ""
	I1016 18:30:32.258740  228782 logs.go:282] 1 containers: [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98]
	I1016 18:30:32.258788  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:32.263551  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:30:32.263613  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:30:32.296401  228782 cri.go:89] found id: ""
	I1016 18:30:32.296425  228782 logs.go:282] 0 containers: []
	W1016 18:30:32.296434  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:30:32.296442  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:30:32.296497  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:30:32.326547  228782 cri.go:89] found id: ""
	I1016 18:30:32.326571  228782 logs.go:282] 0 containers: []
	W1016 18:30:32.326582  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:30:32.326589  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:30:32.326644  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:30:32.356936  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:32.356958  228782 cri.go:89] found id: ""
	I1016 18:30:32.356967  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:30:32.357019  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:32.362544  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:30:32.362635  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:30:32.395505  228782 cri.go:89] found id: ""
	I1016 18:30:32.395532  228782 logs.go:282] 0 containers: []
	W1016 18:30:32.395543  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:30:32.395551  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:30:32.395622  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:30:32.428706  228782 cri.go:89] found id: "5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:32.428763  228782 cri.go:89] found id: ""
	I1016 18:30:32.428774  228782 logs.go:282] 1 containers: [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a]
	I1016 18:30:32.428838  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:32.434349  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:30:32.434416  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:30:32.470198  228782 cri.go:89] found id: ""
	I1016 18:30:32.470228  228782 logs.go:282] 0 containers: []
	W1016 18:30:32.470239  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:30:32.470247  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:30:32.470301  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:30:32.504552  228782 cri.go:89] found id: ""
	I1016 18:30:32.504581  228782 logs.go:282] 0 containers: []
	W1016 18:30:32.504591  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:30:32.504601  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:30:32.504615  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:30:32.573672  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:30:32.573710  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:30:32.612881  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:30:32.612911  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:30:32.713049  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:30:32.713083  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:30:32.728378  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:30:32.728404  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:30:32.791028  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:30:32.791051  228782 logs.go:123] Gathering logs for kube-apiserver [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98] ...
	I1016 18:30:32.791067  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:32.833610  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:30:32.833641  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:32.896053  228782 logs.go:123] Gathering logs for kube-controller-manager [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a] ...
	I1016 18:30:32.896081  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	W1016 18:30:32.763776  265507 pod_ready.go:104] pod "coredns-66bc5c9577-v85b5" is not "Ready", error: <nil>
	W1016 18:30:35.264189  265507 pod_ready.go:104] pod "coredns-66bc5c9577-v85b5" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.248675199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.252250782Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=5865aaca-34fd-458f-b80a-e56ed6390d50 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.252813363Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=722601b9-2140-4082-9997-0d75be1d9803 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.253910686Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.254341164Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.254517199Z" level=info msg="Ran pod sandbox 8034b8a69a4811885619534c44384fa378509b121712b49b1924a32548398f8c with infra container: kube-system/kindnet-chqrm/POD" id=5865aaca-34fd-458f-b80a-e56ed6390d50 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.255054612Z" level=info msg="Ran pod sandbox f2920fde5e2f1a6a87cd0079a3708583283cd10755a2b69ef1a6eaade7a33941 with infra container: kube-system/kube-proxy-dvbrk/POD" id=722601b9-2140-4082-9997-0d75be1d9803 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.255702357Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=043c7751-ee3c-428a-a1bd-019bd59d5aca name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.255967959Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f541bdf0-81d8-4496-a3db-ec0f018ce2c7 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.256625876Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=c7da9519-57fd-46f4-947e-f17f125f382f name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.256883205Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=002c5443-d437-47e7-96f0-5573bed9e830 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.257674872Z" level=info msg="Creating container: kube-system/kindnet-chqrm/kindnet-cni" id=daade1f8-6fbf-487d-bf5f-8419e48f467a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.25778474Z" level=info msg="Creating container: kube-system/kube-proxy-dvbrk/kube-proxy" id=462fb99d-008c-4b01-9980-69a118298cd5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.257968435Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.258046216Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.263000129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.263704883Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.263809257Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.264334938Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.291099066Z" level=info msg="Created container f22bad01be6a35f79cdfb25800fd0f0d7cd7370345fa9e1932b29677a6bdbb05: kube-system/kindnet-chqrm/kindnet-cni" id=daade1f8-6fbf-487d-bf5f-8419e48f467a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.291769057Z" level=info msg="Starting container: f22bad01be6a35f79cdfb25800fd0f0d7cd7370345fa9e1932b29677a6bdbb05" id=78c98d64-fa01-4b0f-a96e-8f942542b463 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.293625661Z" level=info msg="Started container" PID=1044 containerID=f22bad01be6a35f79cdfb25800fd0f0d7cd7370345fa9e1932b29677a6bdbb05 description=kube-system/kindnet-chqrm/kindnet-cni id=78c98d64-fa01-4b0f-a96e-8f942542b463 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8034b8a69a4811885619534c44384fa378509b121712b49b1924a32548398f8c
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.294182891Z" level=info msg="Created container 2626af93c90a36452cc6e8d9a0079d7fc6a8712dbdc80a69341194f4764988b6: kube-system/kube-proxy-dvbrk/kube-proxy" id=462fb99d-008c-4b01-9980-69a118298cd5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.294902966Z" level=info msg="Starting container: 2626af93c90a36452cc6e8d9a0079d7fc6a8712dbdc80a69341194f4764988b6" id=b6a6a56a-32e0-4ff4-9e82-be21d7a65314 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:30:31 newest-cni-794682 crio[525]: time="2025-10-16T18:30:31.297826472Z" level=info msg="Started container" PID=1045 containerID=2626af93c90a36452cc6e8d9a0079d7fc6a8712dbdc80a69341194f4764988b6 description=kube-system/kube-proxy-dvbrk/kube-proxy id=b6a6a56a-32e0-4ff4-9e82-be21d7a65314 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f2920fde5e2f1a6a87cd0079a3708583283cd10755a2b69ef1a6eaade7a33941
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f22bad01be6a3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   8034b8a69a481       kindnet-chqrm                               kube-system
	2626af93c90a3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   f2920fde5e2f1       kube-proxy-dvbrk                            kube-system
	ab9094c30ff22       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 seconds ago       Running             kube-controller-manager   1                   b9d339bf057ec       kube-controller-manager-newest-cni-794682   kube-system
	91c42f392b907       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      1                   55a91e8bc84f1       etcd-newest-cni-794682                      kube-system
	e35972d82e9c2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 seconds ago       Running             kube-scheduler            1                   5694790785f1e       kube-scheduler-newest-cni-794682            kube-system
	91494023e5e1b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 seconds ago       Running             kube-apiserver            1                   447cfbb10e532       kube-apiserver-newest-cni-794682            kube-system
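Note: all six control-plane containers show ATTEMPT 1, i.e. one restart after the node was stopped and started, and the truncated CONTAINER and POD ID columns match the full IDs in the CRI-O log above. When the untruncated IDs are needed, crictl can emit machine-readable output (a sketch; --output json is the flag I believe carries full IDs):

    sudo crictl ps -a --output json | head      # full container/sandbox IDs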
	
	
	==> describe nodes <==
	Name:               newest-cni-794682
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-794682
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=newest-cni-794682
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_30_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:30:06 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-794682
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:30:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:30:30 +0000   Thu, 16 Oct 2025 18:30:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:30:30 +0000   Thu, 16 Oct 2025 18:30:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:30:30 +0000   Thu, 16 Oct 2025 18:30:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 16 Oct 2025 18:30:30 +0000   Thu, 16 Oct 2025 18:30:04 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-794682
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                815246ac-cdb2-4d78-ba36-a1b7df678ead
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-794682                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-chqrm                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-newest-cni-794682             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-newest-cni-794682    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-dvbrk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-newest-cni-794682             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age               From             Message
	  ----    ------                   ----              ----             -------
	  Normal  Starting                 22s               kube-proxy       
	  Normal  Starting                 6s                kube-proxy       
	  Normal  Starting                 29s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s               kubelet          Node newest-cni-794682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s               kubelet          Node newest-cni-794682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s               kubelet          Node newest-cni-794682 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s               node-controller  Node newest-cni-794682 event: Registered Node newest-cni-794682 in Controller
	  Normal  Starting                 10s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 10s)  kubelet          Node newest-cni-794682 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 10s)  kubelet          Node newest-cni-794682 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x8 over 10s)  kubelet          Node newest-cni-794682 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                node-controller  Node newest-cni-794682 event: Registered Node newest-cni-794682 in Controller
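Note: the Ready=False condition and the node.kubernetes.io/not-ready:NoSchedule taint trace back to the same cause, no CNI configuration file in /etc/cni/net.d/ yet. kindnet (restarted 6 seconds ago per the container status above) writes that file once it is running, after which the kubelet flips the node to Ready and the taint is removed. A quick way to watch this by hand (a sketch):

    minikube -p newest-cni-794682 ssh -- ls /etc/cni/net.d/      # CNI config appears once kindnet syncs
    kubectl get node newest-cni-794682 -w                        # watch the Ready transition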
	
	
	==> dmesg <==
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.008703] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.022984] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023933] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +2.047848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +4.031714] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +8.063410] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[ +16.382910] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[Oct16 17:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	
	
	==> etcd [91c42f392b9070df7654a66f4ce71b4f085c9f171014a9ee55a5f0bb8c327f14] <==
	{"level":"warn","ts":"2025-10-16T18:30:29.710157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.718168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.728695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.743122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.749923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.756585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.764025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.771870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.778581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.785774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.792527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.799208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.805678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.813396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.826846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.835023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.842257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.849448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.856435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.863276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.870820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.882521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.890306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.896605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:29.941346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49944","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:30:37 up  1:13,  0 user,  load average: 5.13, 3.16, 1.95
	Linux newest-cni-794682 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f22bad01be6a35f79cdfb25800fd0f0d7cd7370345fa9e1932b29677a6bdbb05] <==
	I1016 18:30:31.543532       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 18:30:31.543810       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1016 18:30:31.543943       1 main.go:148] setting mtu 1500 for CNI 
	I1016 18:30:31.543963       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 18:30:31.543973       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T18:30:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 18:30:31.744862       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 18:30:31.744928       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 18:30:31.744941       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 18:30:31.745099       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 18:30:32.045228       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 18:30:32.045254       1 metrics.go:72] Registering metrics
	I1016 18:30:32.045309       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [91494023e5e1bf1f6307bcee4e2d533dfa2cd3d963e37741b3f5ab473d748861] <==
	I1016 18:30:30.420798       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 18:30:30.421081       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1016 18:30:30.421363       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1016 18:30:30.421439       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1016 18:30:30.421409       1 aggregator.go:171] initial CRD sync complete...
	I1016 18:30:30.421850       1 autoregister_controller.go:144] Starting autoregister controller
	I1016 18:30:30.421868       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 18:30:30.421875       1 cache.go:39] Caches are synced for autoregister controller
	I1016 18:30:30.423900       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1016 18:30:30.427687       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1016 18:30:30.429870       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1016 18:30:30.433977       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1016 18:30:30.434238       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1016 18:30:30.450509       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:30:30.680631       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 18:30:30.710511       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 18:30:30.730924       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 18:30:30.739281       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 18:30:30.747825       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 18:30:30.786641       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.42.49"}
	I1016 18:30:30.798427       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.209.126"}
	I1016 18:30:31.324906       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 18:30:34.175551       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 18:30:34.226223       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 18:30:34.276471       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ab9094c30ff22e9bfab5eec94732ce5878232de56d4a25020e9e9ad3911f02bf] <==
	I1016 18:30:33.748248       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1016 18:30:33.752439       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1016 18:30:33.772098       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1016 18:30:33.772221       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1016 18:30:33.772243       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1016 18:30:33.772260       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1016 18:30:33.772638       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1016 18:30:33.772398       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 18:30:33.772392       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1016 18:30:33.773113       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1016 18:30:33.773203       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1016 18:30:33.773528       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1016 18:30:33.774697       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1016 18:30:33.774792       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 18:30:33.774808       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1016 18:30:33.774829       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1016 18:30:33.776870       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1016 18:30:33.777440       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:30:33.777507       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 18:30:33.782602       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 18:30:33.784960       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1016 18:30:33.787451       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 18:30:33.790672       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1016 18:30:33.792966       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1016 18:30:33.796368       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2626af93c90a36452cc6e8d9a0079d7fc6a8712dbdc80a69341194f4764988b6] <==
	I1016 18:30:31.336399       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:30:31.390219       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:30:31.490669       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:30:31.490774       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1016 18:30:31.490882       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:30:31.513410       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:30:31.513464       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:30:31.519488       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:30:31.520014       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:30:31.520057       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:30:31.521583       1 config.go:200] "Starting service config controller"
	I1016 18:30:31.521607       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:30:31.521594       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:30:31.521636       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:30:31.521668       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:30:31.521673       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:30:31.521788       1 config.go:309] "Starting node config controller"
	I1016 18:30:31.521814       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:30:31.521822       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:30:31.622520       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 18:30:31.622572       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1016 18:30:31.622599       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e35972d82e9c22de02eeb267933f4b1af09651a36aa1249da16d297f40f25ec5] <==
	I1016 18:30:29.219303       1 serving.go:386] Generated self-signed cert in-memory
	I1016 18:30:30.386832       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 18:30:30.386858       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:30:30.395043       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1016 18:30:30.395082       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1016 18:30:30.395081       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:30:30.395088       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 18:30:30.395104       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:30:30.395106       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 18:30:30.395603       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 18:30:30.395968       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 18:30:30.496150       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1016 18:30:30.496178       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:30:30.496240       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 16 18:30:29 newest-cni-794682 kubelet[673]: E1016 18:30:29.984872     673 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-794682\" not found" node="newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.439763     673 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.439856     673 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.439890     673 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.440673     673 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.443650     673 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: E1016 18:30:30.455587     673 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-794682\" already exists" pod="kube-system/kube-controller-manager-newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.455627     673 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: E1016 18:30:30.464154     673 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-794682\" already exists" pod="kube-system/kube-scheduler-newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.464190     673 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: E1016 18:30:30.470591     673 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-794682\" already exists" pod="kube-system/etcd-newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.470625     673 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: E1016 18:30:30.477570     673 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-794682\" already exists" pod="kube-system/kube-apiserver-newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.939923     673 apiserver.go:52] "Watching apiserver"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.949958     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f697f30d-64fa-4695-ae47-0268f2604e30-xtables-lock\") pod \"kindnet-chqrm\" (UID: \"f697f30d-64fa-4695-ae47-0268f2604e30\") " pod="kube-system/kindnet-chqrm"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.950005     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f697f30d-64fa-4695-ae47-0268f2604e30-lib-modules\") pod \"kindnet-chqrm\" (UID: \"f697f30d-64fa-4695-ae47-0268f2604e30\") " pod="kube-system/kindnet-chqrm"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.950053     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f697f30d-64fa-4695-ae47-0268f2604e30-cni-cfg\") pod \"kindnet-chqrm\" (UID: \"f697f30d-64fa-4695-ae47-0268f2604e30\") " pod="kube-system/kindnet-chqrm"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: I1016 18:30:30.987098     673 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-794682"
	Oct 16 18:30:30 newest-cni-794682 kubelet[673]: E1016 18:30:30.993461     673 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-794682\" already exists" pod="kube-system/kube-scheduler-newest-cni-794682"
	Oct 16 18:30:31 newest-cni-794682 kubelet[673]: I1016 18:30:31.044530     673 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 16 18:30:31 newest-cni-794682 kubelet[673]: I1016 18:30:31.050493     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15fff10c-5233-4292-8a44-6005c5ad3ff1-xtables-lock\") pod \"kube-proxy-dvbrk\" (UID: \"15fff10c-5233-4292-8a44-6005c5ad3ff1\") " pod="kube-system/kube-proxy-dvbrk"
	Oct 16 18:30:31 newest-cni-794682 kubelet[673]: I1016 18:30:31.050533     673 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15fff10c-5233-4292-8a44-6005c5ad3ff1-lib-modules\") pod \"kube-proxy-dvbrk\" (UID: \"15fff10c-5233-4292-8a44-6005c5ad3ff1\") " pod="kube-system/kube-proxy-dvbrk"
	Oct 16 18:30:32 newest-cni-794682 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 18:30:32 newest-cni-794682 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 18:30:32 newest-cni-794682 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
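Note on the logs above: the repeated etcd "rejected connection ... EOF" warnings are typically benign, emitted when a client (such as a TCP health probe) closes the connection before completing a handshake, and the dump ends with systemd stopping kubelet, which matches the first step of the pause flow captured later in this report. Separately, kube-proxy warns that nodePortAddresses is unset, so NodePort connections are accepted on all local IPs; the warning itself names the remedy. A minimal sketch of the suggested invocation (assuming flag-based rather than config-file configuration of kube-proxy):

	kube-proxy --nodeport-addresses primary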
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-794682 -n newest-cni-794682
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-794682 -n newest-cni-794682: exit status 2 (319.204184ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-794682 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-7k82h storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fmcgb kubernetes-dashboard-855c9754f9-pl2sx
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-794682 describe pod coredns-66bc5c9577-7k82h storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fmcgb kubernetes-dashboard-855c9754f9-pl2sx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-794682 describe pod coredns-66bc5c9577-7k82h storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fmcgb kubernetes-dashboard-855c9754f9-pl2sx: exit status 1 (61.55345ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-7k82h" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-fmcgb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-pl2sx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-794682 describe pod coredns-66bc5c9577-7k82h storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fmcgb kubernetes-dashboard-855c9754f9-pl2sx: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.92s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (7.75s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-063117 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-063117 --alsologtostderr -v=1: exit status 80 (2.717590122s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-063117 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:30:58.043938  282118 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:30:58.044080  282118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:30:58.044094  282118 out.go:374] Setting ErrFile to fd 2...
	I1016 18:30:58.044103  282118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:30:58.044369  282118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:30:58.044709  282118 out.go:368] Setting JSON to false
	I1016 18:30:58.044770  282118 mustload.go:65] Loading cluster: embed-certs-063117
	I1016 18:30:58.045201  282118 config.go:182] Loaded profile config "embed-certs-063117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:30:58.045645  282118 cli_runner.go:164] Run: docker container inspect embed-certs-063117 --format={{.State.Status}}
	I1016 18:30:58.072510  282118 host.go:66] Checking if "embed-certs-063117" exists ...
	I1016 18:30:58.072872  282118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:30:58.161051  282118 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-16 18:30:58.141663023 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:30:58.161914  282118 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-063117 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1016 18:30:58.165300  282118 out.go:179] * Pausing node embed-certs-063117 ... 
	I1016 18:30:58.166903  282118 host.go:66] Checking if "embed-certs-063117" exists ...
	I1016 18:30:58.167212  282118 ssh_runner.go:195] Run: systemctl --version
	I1016 18:30:58.167276  282118 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-063117
	I1016 18:30:58.190735  282118 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/embed-certs-063117/id_rsa Username:docker}
	I1016 18:30:58.300551  282118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:30:58.319189  282118 pause.go:52] kubelet running: true
	I1016 18:30:58.319263  282118 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:30:58.551930  282118 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:30:58.552012  282118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:30:58.648970  282118 cri.go:89] found id: "f128332ce9a15a83b597d85035bf0d9574b536f9f0ba19197e4afaa75110ed61"
	I1016 18:30:58.649004  282118 cri.go:89] found id: "ec8c24b02887950550c5bfedba2b9c147d4462672b297fe7e1f23725f0ff2932"
	I1016 18:30:58.649011  282118 cri.go:89] found id: "8594c5daefcc948d0e17138aa8783128805c619d9b989653499c9f82482639b8"
	I1016 18:30:58.649016  282118 cri.go:89] found id: "580a1955626de81ad6bfc45b716b795bbc8c63864a0d9ff99b5baaf1a66027b6"
	I1016 18:30:58.649020  282118 cri.go:89] found id: "86ca4639090df40b57d4d275c7f7d0354df18adeb33f2689643538a67a9a4213"
	I1016 18:30:58.649026  282118 cri.go:89] found id: "3e0c4612dffa1aabc4e2f885041d6627f61173da3b7020983a01c437c6a01614"
	I1016 18:30:58.649031  282118 cri.go:89] found id: "121a4f69e5a4ec28f63e829110167be9cf60003ff5d32b2bdc8c692d0ace2885"
	I1016 18:30:58.649035  282118 cri.go:89] found id: "06ca051cf2af9db9b9423a3d071cf2e2f07fed9b27fcff6325f04c31e90791ba"
	I1016 18:30:58.649039  282118 cri.go:89] found id: "2beb45b09647681cb2d18ce222e01f57ca8f2532e9f2683c679b5b3bbb182aeb"
	I1016 18:30:58.649055  282118 cri.go:89] found id: "6343af61af5267179390838bbaf09507511c460d0f16d6487353a3356ee5cb20"
	I1016 18:30:58.649060  282118 cri.go:89] found id: "44cbf419b1a42e9eb73523f5d588b99db8c45ab77ab1643b0118bfcce5a3f08a"
	I1016 18:30:58.649064  282118 cri.go:89] found id: ""
	I1016 18:30:58.649117  282118 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:30:58.664788  282118 retry.go:31] will retry after 259.148967ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:30:58Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:30:58.924211  282118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:30:58.943012  282118 pause.go:52] kubelet running: false
	I1016 18:30:58.943084  282118 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:30:59.140796  282118 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:30:59.140897  282118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:30:59.224601  282118 cri.go:89] found id: "f128332ce9a15a83b597d85035bf0d9574b536f9f0ba19197e4afaa75110ed61"
	I1016 18:30:59.224624  282118 cri.go:89] found id: "ec8c24b02887950550c5bfedba2b9c147d4462672b297fe7e1f23725f0ff2932"
	I1016 18:30:59.224630  282118 cri.go:89] found id: "8594c5daefcc948d0e17138aa8783128805c619d9b989653499c9f82482639b8"
	I1016 18:30:59.224635  282118 cri.go:89] found id: "580a1955626de81ad6bfc45b716b795bbc8c63864a0d9ff99b5baaf1a66027b6"
	I1016 18:30:59.224639  282118 cri.go:89] found id: "86ca4639090df40b57d4d275c7f7d0354df18adeb33f2689643538a67a9a4213"
	I1016 18:30:59.224645  282118 cri.go:89] found id: "3e0c4612dffa1aabc4e2f885041d6627f61173da3b7020983a01c437c6a01614"
	I1016 18:30:59.224649  282118 cri.go:89] found id: "121a4f69e5a4ec28f63e829110167be9cf60003ff5d32b2bdc8c692d0ace2885"
	I1016 18:30:59.224653  282118 cri.go:89] found id: "06ca051cf2af9db9b9423a3d071cf2e2f07fed9b27fcff6325f04c31e90791ba"
	I1016 18:30:59.224657  282118 cri.go:89] found id: "2beb45b09647681cb2d18ce222e01f57ca8f2532e9f2683c679b5b3bbb182aeb"
	I1016 18:30:59.224671  282118 cri.go:89] found id: "6343af61af5267179390838bbaf09507511c460d0f16d6487353a3356ee5cb20"
	I1016 18:30:59.224675  282118 cri.go:89] found id: "44cbf419b1a42e9eb73523f5d588b99db8c45ab77ab1643b0118bfcce5a3f08a"
	I1016 18:30:59.224679  282118 cri.go:89] found id: ""
	I1016 18:30:59.224759  282118 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:30:59.236800  282118 retry.go:31] will retry after 259.226175ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:30:59Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:30:59.496259  282118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:30:59.516344  282118 pause.go:52] kubelet running: false
	I1016 18:30:59.516406  282118 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:30:59.701695  282118 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:30:59.701881  282118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:30:59.795753  282118 cri.go:89] found id: "f128332ce9a15a83b597d85035bf0d9574b536f9f0ba19197e4afaa75110ed61"
	I1016 18:30:59.795778  282118 cri.go:89] found id: "ec8c24b02887950550c5bfedba2b9c147d4462672b297fe7e1f23725f0ff2932"
	I1016 18:30:59.795783  282118 cri.go:89] found id: "8594c5daefcc948d0e17138aa8783128805c619d9b989653499c9f82482639b8"
	I1016 18:30:59.795788  282118 cri.go:89] found id: "580a1955626de81ad6bfc45b716b795bbc8c63864a0d9ff99b5baaf1a66027b6"
	I1016 18:30:59.795792  282118 cri.go:89] found id: "86ca4639090df40b57d4d275c7f7d0354df18adeb33f2689643538a67a9a4213"
	I1016 18:30:59.795797  282118 cri.go:89] found id: "3e0c4612dffa1aabc4e2f885041d6627f61173da3b7020983a01c437c6a01614"
	I1016 18:30:59.795823  282118 cri.go:89] found id: "121a4f69e5a4ec28f63e829110167be9cf60003ff5d32b2bdc8c692d0ace2885"
	I1016 18:30:59.795827  282118 cri.go:89] found id: "06ca051cf2af9db9b9423a3d071cf2e2f07fed9b27fcff6325f04c31e90791ba"
	I1016 18:30:59.795845  282118 cri.go:89] found id: "2beb45b09647681cb2d18ce222e01f57ca8f2532e9f2683c679b5b3bbb182aeb"
	I1016 18:30:59.795861  282118 cri.go:89] found id: "6343af61af5267179390838bbaf09507511c460d0f16d6487353a3356ee5cb20"
	I1016 18:30:59.795866  282118 cri.go:89] found id: "44cbf419b1a42e9eb73523f5d588b99db8c45ab77ab1643b0118bfcce5a3f08a"
	I1016 18:30:59.795870  282118 cri.go:89] found id: ""
	I1016 18:30:59.796333  282118 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:30:59.814801  282118 retry.go:31] will retry after 460.56871ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:30:59Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:31:00.276832  282118 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:31:00.297733  282118 pause.go:52] kubelet running: false
	I1016 18:31:00.297799  282118 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:31:00.545097  282118 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:31:00.545265  282118 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:31:00.666303  282118 cri.go:89] found id: "f128332ce9a15a83b597d85035bf0d9574b536f9f0ba19197e4afaa75110ed61"
	I1016 18:31:00.666331  282118 cri.go:89] found id: "ec8c24b02887950550c5bfedba2b9c147d4462672b297fe7e1f23725f0ff2932"
	I1016 18:31:00.666336  282118 cri.go:89] found id: "8594c5daefcc948d0e17138aa8783128805c619d9b989653499c9f82482639b8"
	I1016 18:31:00.666341  282118 cri.go:89] found id: "580a1955626de81ad6bfc45b716b795bbc8c63864a0d9ff99b5baaf1a66027b6"
	I1016 18:31:00.666345  282118 cri.go:89] found id: "86ca4639090df40b57d4d275c7f7d0354df18adeb33f2689643538a67a9a4213"
	I1016 18:31:00.666349  282118 cri.go:89] found id: "3e0c4612dffa1aabc4e2f885041d6627f61173da3b7020983a01c437c6a01614"
	I1016 18:31:00.666353  282118 cri.go:89] found id: "121a4f69e5a4ec28f63e829110167be9cf60003ff5d32b2bdc8c692d0ace2885"
	I1016 18:31:00.666358  282118 cri.go:89] found id: "06ca051cf2af9db9b9423a3d071cf2e2f07fed9b27fcff6325f04c31e90791ba"
	I1016 18:31:00.666362  282118 cri.go:89] found id: "2beb45b09647681cb2d18ce222e01f57ca8f2532e9f2683c679b5b3bbb182aeb"
	I1016 18:31:00.666369  282118 cri.go:89] found id: "6343af61af5267179390838bbaf09507511c460d0f16d6487353a3356ee5cb20"
	I1016 18:31:00.666373  282118 cri.go:89] found id: "44cbf419b1a42e9eb73523f5d588b99db8c45ab77ab1643b0118bfcce5a3f08a"
	I1016 18:31:00.666377  282118 cri.go:89] found id: ""
	I1016 18:31:00.666435  282118 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:31:00.690288  282118 out.go:203] 
	W1016 18:31:00.691618  282118 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:31:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:31:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:31:00.691637  282118 out.go:285] * 
	* 
	W1016 18:31:00.698497  282118 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:31:00.701146  282118 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-063117 --alsologtostderr -v=1 failed: exit status 80
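The stderr above isolates the failure: the pause flow disables kubelet successfully, and crictl still enumerates the kube-system and kubernetes-dashboard containers, but every attempt to list running containers with "sudo runc list -f json" exits 1 because /run/runc does not exist, so minikube gives up after its retries and exits with GUEST_PAUSE. A minimal manual reproduction against the node (a sketch assuming the profile name from this test; the effective runc root can differ per CRI-O configuration):

	out/minikube-linux-amd64 ssh -p embed-certs-063117 -- sudo ls /run/runc
	out/minikube-linux-amd64 ssh -p embed-certs-063117 -- sudo runc list -f json

If the first command fails with "No such file or directory" while "sudo crictl ps" still shows containers, the runtime state lives under a runc root other than the default /run/runc.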
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-063117
helpers_test.go:243: (dbg) docker inspect embed-certs-063117:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1",
	        "Created": "2025-10-16T18:28:54.918690306Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 265713,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:30:02.078894139Z",
	            "FinishedAt": "2025-10-16T18:30:01.196271488Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1/hosts",
	        "LogPath": "/var/lib/docker/containers/1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1/1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1-json.log",
	        "Name": "/embed-certs-063117",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-063117:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-063117",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1",
	                "LowerDir": "/var/lib/docker/overlay2/6b98c07b3e2c8bbba9f118db15e4186266a8da19f0536e0a0088d84b01fc366f-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6b98c07b3e2c8bbba9f118db15e4186266a8da19f0536e0a0088d84b01fc366f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6b98c07b3e2c8bbba9f118db15e4186266a8da19f0536e0a0088d84b01fc366f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6b98c07b3e2c8bbba9f118db15e4186266a8da19f0536e0a0088d84b01fc366f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-063117",
	                "Source": "/var/lib/docker/volumes/embed-certs-063117/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-063117",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-063117",
	                "name.minikube.sigs.k8s.io": "embed-certs-063117",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e58640c535d3b64ab12de6be448f4c02c1b8a8b8f550185407e51a3227d8b5d0",
	            "SandboxKey": "/var/run/docker/netns/e58640c535d3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-063117": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:60:3b:7f:ff:0a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d58ff291817e0d805fb2a74d398badc9c07572e1fefc22609c9ab31d677b2e36",
	                    "EndpointID": "14001818e798857b9d949e230b6d558100606593d367fd2a2e960ba374dda3ce",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-063117",
	                        "1fe6653a430a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
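The Ports map in the inspect output above is the same data the pause command resolved earlier to open its SSH session (HostPort 33088 for 22/tcp). The lookup can be repeated by hand with the Go template from the stderr log, shown here as a sketch:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-063117

Against the state captured above this should print 33088.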
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-063117 -n embed-certs-063117
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-063117 -n embed-certs-063117: exit status 2 (432.568764ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-063117 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-063117 logs -n 25: (1.606437707s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-523257 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:30 UTC │
	│ image   │ no-preload-808539 image list --format=json                                                                                                                                                                                                    │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ pause   │ -p no-preload-808539 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-063117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ stop    │ -p embed-certs-063117 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:30 UTC │
	│ delete  │ -p no-preload-808539                                                                                                                                                                                                                          │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p no-preload-808539                                                                                                                                                                                                                          │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p newest-cni-794682 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:30 UTC │
	│ addons  │ enable dashboard -p embed-certs-063117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ start   │ -p embed-certs-063117 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ addons  │ enable metrics-server -p newest-cni-794682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ stop    │ -p newest-cni-794682 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ addons  │ enable dashboard -p newest-cni-794682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ start   │ -p newest-cni-794682 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-523257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-523257 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ image   │ newest-cni-794682 image list --format=json                                                                                                                                                                                                    │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ pause   │ -p newest-cni-794682 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ delete  │ -p newest-cni-794682                                                                                                                                                                                                                          │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ delete  │ -p newest-cni-794682                                                                                                                                                                                                                          │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ start   │ -p auto-084411 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-523257 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ start   │ -p default-k8s-diff-port-523257 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ image   │ embed-certs-063117 image list --format=json                                                                                                                                                                                                   │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ pause   │ -p embed-certs-063117 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:30:45
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
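
Every entry below follows the klog header format stated above. As a reading aid only (illustrative, not minikube code), a minimal Go sketch that splits such a line into its fields:

package main

import (
	"fmt"
	"regexp"
)

// klogLine follows the header's stated format:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:\]]+):(\d+)\] (.*)$`)

func main() {
	line := "I1016 18:30:45.894705  277941 out.go:360] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	// m[1]=severity, m[2]=mmdd, m[3]=time, m[4]=thread id, m[5]=file, m[6]=line, m[7]=message
	fmt.Printf("severity=%s date=%s time=%s tid=%s at=%s:%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}
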
	I1016 18:30:45.894705  277941 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:30:45.894964  277941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:30:45.894972  277941 out.go:374] Setting ErrFile to fd 2...
	I1016 18:30:45.894976  277941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:30:45.895190  277941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:30:45.895604  277941 out.go:368] Setting JSON to false
	I1016 18:30:45.896657  277941 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4394,"bootTime":1760635052,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:30:45.896769  277941 start.go:141] virtualization: kvm guest
	I1016 18:30:45.898518  277941 out.go:179] * [default-k8s-diff-port-523257] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:30:45.900123  277941 notify.go:220] Checking for updates...
	I1016 18:30:45.900150  277941 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:30:45.901328  277941 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:30:45.902686  277941 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:30:45.903898  277941 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 18:30:45.904956  277941 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:30:45.906026  277941 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:30:45.907646  277941 config.go:182] Loaded profile config "default-k8s-diff-port-523257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:30:45.908146  277941 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:30:45.941292  277941 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 18:30:45.941434  277941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:30:46.000246  277941 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-16 18:30:45.990117125 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:30:46.000397  277941 docker.go:318] overlay module found
	I1016 18:30:46.002305  277941 out.go:179] * Using the docker driver based on existing profile
	I1016 18:30:46.003506  277941 start.go:305] selected driver: docker
	I1016 18:30:46.003528  277941 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-523257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:30:46.003648  277941 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:30:46.004390  277941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:30:46.063060  277941 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-16 18:30:46.053620177 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:30:46.063356  277941 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:30:46.063387  277941 cni.go:84] Creating CNI manager for ""
	I1016 18:30:46.063432  277941 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:30:46.063462  277941 start.go:349] cluster config:
	{Name:default-k8s-diff-port-523257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:30:46.065491  277941 out.go:179] * Starting "default-k8s-diff-port-523257" primary control-plane node in "default-k8s-diff-port-523257" cluster
	I1016 18:30:46.066811  277941 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:30:46.068098  277941 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:30:41.303236  276879 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1016 18:30:41.303468  276879 start.go:159] libmachine.API.Create for "auto-084411" (driver="docker")
	I1016 18:30:41.303500  276879 client.go:168] LocalClient.Create starting
	I1016 18:30:41.303588  276879 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem
	I1016 18:30:41.303652  276879 main.go:141] libmachine: Decoding PEM data...
	I1016 18:30:41.303681  276879 main.go:141] libmachine: Parsing certificate...
	I1016 18:30:41.303764  276879 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem
	I1016 18:30:41.303795  276879 main.go:141] libmachine: Decoding PEM data...
	I1016 18:30:41.303812  276879 main.go:141] libmachine: Parsing certificate...
	I1016 18:30:41.304221  276879 cli_runner.go:164] Run: docker network inspect auto-084411 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1016 18:30:41.322030  276879 cli_runner.go:211] docker network inspect auto-084411 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1016 18:30:41.322127  276879 network_create.go:284] running [docker network inspect auto-084411] to gather additional debugging logs...
	I1016 18:30:41.322152  276879 cli_runner.go:164] Run: docker network inspect auto-084411
	W1016 18:30:41.339995  276879 cli_runner.go:211] docker network inspect auto-084411 returned with exit code 1
	I1016 18:30:41.340026  276879 network_create.go:287] error running [docker network inspect auto-084411]: docker network inspect auto-084411: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-084411 not found
	I1016 18:30:41.340043  276879 network_create.go:289] output of [docker network inspect auto-084411]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-084411 not found
	
	** /stderr **
	I1016 18:30:41.340117  276879 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:30:41.358003  276879 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e6b487beca69 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:46:43:25:0f:93} reservation:<nil>}
	I1016 18:30:41.358844  276879 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9d79ecee39e1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:a0:12:f5:af:3a} reservation:<nil>}
	I1016 18:30:41.359596  276879 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-23b5ade12eda IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:13:e4:8d:c1:04} reservation:<nil>}
	I1016 18:30:41.360136  276879 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a07ac2eb0982 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:42:2a:d5:21:5c:9c} reservation:<nil>}
	I1016 18:30:41.360745  276879 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-18ba3d114872 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:7e:c9:1a:4a:56:57} reservation:<nil>}
	I1016 18:30:41.361570  276879 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dc1cb0}
	I1016 18:30:41.361593  276879 network_create.go:124] attempt to create docker network auto-084411 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1016 18:30:41.361661  276879 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-084411 auto-084411
	I1016 18:30:41.420156  276879 network_create.go:108] docker network auto-084411 192.168.94.0/24 created
	I1016 18:30:41.420190  276879 kic.go:121] calculated static IP "192.168.94.2" for the "auto-084411" container
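
The scan above skips five occupied /24s and settles on 192.168.94.0/24; the third octet advances by 9 each attempt (49, 58, 67, 76, 85, 94). A hypothetical re-creation of just that selection step, with the stride inferred from this log rather than taken from minikube's source:

package main

import "fmt"

// firstFreeSubnet mimics the scan above: candidate /24s advance by 9
// in the third octet (49, 58, 67, 76, 85, 94, ...) and the first one
// not held by an existing bridge wins.
func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third <= 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	// The five subnets the log reports as already taken.
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.94.0/24
}
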
	I1016 18:30:41.420257  276879 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1016 18:30:41.438101  276879 cli_runner.go:164] Run: docker volume create auto-084411 --label name.minikube.sigs.k8s.io=auto-084411 --label created_by.minikube.sigs.k8s.io=true
	I1016 18:30:41.457003  276879 oci.go:103] Successfully created a docker volume auto-084411
	I1016 18:30:41.457150  276879 cli_runner.go:164] Run: docker run --rm --name auto-084411-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-084411 --entrypoint /usr/bin/test -v auto-084411:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1016 18:30:41.867844  276879 oci.go:107] Successfully prepared a docker volume auto-084411
	I1016 18:30:41.867879  276879 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:30:41.867899  276879 kic.go:194] Starting extracting preloaded images to volume ...
	I1016 18:30:41.867959  276879 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-084411:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	I1016 18:30:45.724659  276879 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-084411:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (3.856646719s)
	I1016 18:30:45.724689  276879 kic.go:203] duration metric: took 3.856788578s to extract preloaded images to volume ...
	W1016 18:30:45.724789  276879 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1016 18:30:45.724827  276879 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1016 18:30:45.724903  276879 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1016 18:30:45.786343  276879 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-084411 --name auto-084411 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-084411 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-084411 --network auto-084411 --ip 192.168.94.2 --volume auto-084411:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1016 18:30:46.098626  276879 cli_runner.go:164] Run: docker container inspect auto-084411 --format={{.State.Running}}
	I1016 18:30:46.069343  277941 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:30:46.069384  277941 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1016 18:30:46.069394  277941 cache.go:58] Caching tarball of preloaded images
	I1016 18:30:46.069380  277941 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:30:46.069479  277941 preload.go:233] Found /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1016 18:30:46.069493  277941 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:30:46.069585  277941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/config.json ...
	I1016 18:30:46.091750  277941 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:30:46.091773  277941 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:30:46.091793  277941 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:30:46.091820  277941 start.go:360] acquireMachinesLock for default-k8s-diff-port-523257: {Name:mk0ef672dc84306ea126d15d9b249684df6a69ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:30:46.091888  277941 start.go:364] duration metric: took 46.302µs to acquireMachinesLock for "default-k8s-diff-port-523257"
	I1016 18:30:46.091911  277941 start.go:96] Skipping create...Using existing machine configuration
	I1016 18:30:46.091917  277941 fix.go:54] fixHost starting: 
	I1016 18:30:46.092166  277941 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:30:46.113390  277941 fix.go:112] recreateIfNeeded on default-k8s-diff-port-523257: state=Stopped err=<nil>
	W1016 18:30:46.113435  277941 fix.go:138] unexpected machine state, will restart: <nil>
	W1016 18:30:43.925802  265507 pod_ready.go:104] pod "coredns-66bc5c9577-v85b5" is not "Ready", error: <nil>
	I1016 18:30:44.898505  265507 pod_ready.go:94] pod "coredns-66bc5c9577-v85b5" is "Ready"
	I1016 18:30:44.898534  265507 pod_ready.go:86] duration metric: took 32.141379474s for pod "coredns-66bc5c9577-v85b5" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:30:44.901286  265507 pod_ready.go:83] waiting for pod "etcd-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:30:44.905790  265507 pod_ready.go:94] pod "etcd-embed-certs-063117" is "Ready"
	I1016 18:30:44.905817  265507 pod_ready.go:86] duration metric: took 4.508952ms for pod "etcd-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:30:44.908482  265507 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:30:44.913527  265507 pod_ready.go:94] pod "kube-apiserver-embed-certs-063117" is "Ready"
	I1016 18:30:44.913555  265507 pod_ready.go:86] duration metric: took 5.048549ms for pod "kube-apiserver-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:30:44.916371  265507 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:30:44.961560  265507 pod_ready.go:94] pod "kube-controller-manager-embed-certs-063117" is "Ready"
	I1016 18:30:44.961584  265507 pod_ready.go:86] duration metric: took 45.184518ms for pod "kube-controller-manager-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:30:45.161871  265507 pod_ready.go:83] waiting for pod "kube-proxy-rsvq2" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:30:45.560897  265507 pod_ready.go:94] pod "kube-proxy-rsvq2" is "Ready"
	I1016 18:30:45.560924  265507 pod_ready.go:86] duration metric: took 399.027842ms for pod "kube-proxy-rsvq2" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:30:45.761687  265507 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:30:46.162021  265507 pod_ready.go:94] pod "kube-scheduler-embed-certs-063117" is "Ready"
	I1016 18:30:46.162051  265507 pod_ready.go:86] duration metric: took 400.342085ms for pod "kube-scheduler-embed-certs-063117" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:30:46.162066  265507 pod_ready.go:40] duration metric: took 33.408619967s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:30:46.221346  265507 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1016 18:30:46.223777  265507 out.go:179] * Done! kubectl is now configured to use "embed-certs-063117" cluster and "default" namespace by default
	I1016 18:30:44.931843  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:30:44.932306  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:30:44.932369  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:30:44.932429  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:30:44.961707  228782 cri.go:89] found id: "0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:44.961750  228782 cri.go:89] found id: ""
	I1016 18:30:44.961760  228782 logs.go:282] 1 containers: [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98]
	I1016 18:30:44.961823  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:44.965989  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:30:44.966065  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:30:44.994847  228782 cri.go:89] found id: ""
	I1016 18:30:44.994875  228782 logs.go:282] 0 containers: []
	W1016 18:30:44.994885  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:30:44.994893  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:30:44.994950  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:30:45.027145  228782 cri.go:89] found id: ""
	I1016 18:30:45.027174  228782 logs.go:282] 0 containers: []
	W1016 18:30:45.027185  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:30:45.027193  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:30:45.027246  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:30:45.055523  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:45.055553  228782 cri.go:89] found id: ""
	I1016 18:30:45.055561  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:30:45.055612  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:45.059932  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:30:45.059995  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:30:45.088466  228782 cri.go:89] found id: ""
	I1016 18:30:45.088487  228782 logs.go:282] 0 containers: []
	W1016 18:30:45.088495  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:30:45.088501  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:30:45.088546  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:30:45.115777  228782 cri.go:89] found id: "5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:45.115803  228782 cri.go:89] found id: ""
	I1016 18:30:45.115811  228782 logs.go:282] 1 containers: [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a]
	I1016 18:30:45.115858  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:45.120113  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:30:45.120190  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:30:45.147975  228782 cri.go:89] found id: ""
	I1016 18:30:45.147998  228782 logs.go:282] 0 containers: []
	W1016 18:30:45.148006  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:30:45.148011  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:30:45.148060  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:30:45.176559  228782 cri.go:89] found id: ""
	I1016 18:30:45.176584  228782 logs.go:282] 0 containers: []
	W1016 18:30:45.176591  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:30:45.176600  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:30:45.176611  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:30:45.234198  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:30:45.234225  228782 logs.go:123] Gathering logs for kube-apiserver [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98] ...
	I1016 18:30:45.234246  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:45.266433  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:30:45.266461  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:45.323153  228782 logs.go:123] Gathering logs for kube-controller-manager [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a] ...
	I1016 18:30:45.323193  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:45.350690  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:30:45.350732  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1016 18:30:45.411978  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:30:45.412016  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:30:45.445377  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:30:45.445407  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:30:45.543035  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:30:45.543072  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:30:48.060350  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:30:48.060947  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:30:48.061008  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:30:48.061111  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:30:48.088338  228782 cri.go:89] found id: "0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:48.088366  228782 cri.go:89] found id: ""
	I1016 18:30:48.088406  228782 logs.go:282] 1 containers: [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98]
	I1016 18:30:48.088502  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:48.092698  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:30:48.092784  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:30:48.121226  228782 cri.go:89] found id: ""
	I1016 18:30:48.121260  228782 logs.go:282] 0 containers: []
	W1016 18:30:48.121272  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:30:48.121279  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:30:48.121348  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:30:48.148825  228782 cri.go:89] found id: ""
	I1016 18:30:48.148853  228782 logs.go:282] 0 containers: []
	W1016 18:30:48.148862  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:30:48.148869  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:30:48.148925  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:30:48.177827  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:48.177853  228782 cri.go:89] found id: ""
	I1016 18:30:48.177863  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:30:48.177922  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:48.182213  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:30:48.182269  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:30:48.209632  228782 cri.go:89] found id: ""
	I1016 18:30:48.209659  228782 logs.go:282] 0 containers: []
	W1016 18:30:48.209667  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:30:48.209672  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:30:48.209735  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:30:48.236989  228782 cri.go:89] found id: "5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:48.237013  228782 cri.go:89] found id: ""
	I1016 18:30:48.237020  228782 logs.go:282] 1 containers: [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a]
	I1016 18:30:48.237066  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:48.241542  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:30:48.241611  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:30:48.269694  228782 cri.go:89] found id: ""
	I1016 18:30:48.269735  228782 logs.go:282] 0 containers: []
	W1016 18:30:48.269747  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:30:48.269754  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:30:48.269809  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:30:48.297704  228782 cri.go:89] found id: ""
	I1016 18:30:48.297749  228782 logs.go:282] 0 containers: []
	W1016 18:30:48.297761  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:30:48.297772  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:30:48.297787  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:30:48.329451  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:30:48.329477  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:30:48.420052  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:30:48.420086  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:30:48.435071  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:30:48.435104  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:30:48.494885  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:30:48.494907  228782 logs.go:123] Gathering logs for kube-apiserver [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98] ...
	I1016 18:30:48.494918  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:48.526378  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:30:48.526413  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:48.583183  228782 logs.go:123] Gathering logs for kube-controller-manager [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a] ...
	I1016 18:30:48.583219  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:48.610786  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:30:48.610829  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
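
At this point process 228782 is looping: probe https://192.168.76.2:8443/healthz, get connection refused, gather logs, retry a few seconds later. A minimal sketch of such a poll loop, assuming only the standard library (minikube's actual api_server.go logic differs):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200 or the timeout elapses.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The cluster CA is not available to this standalone sketch,
		// so certificate verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(3 * time.Second) // roughly the retry gap seen in the log
	}
	return fmt.Errorf("%s never became healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthy("https://192.168.76.2:8443/healthz", 30*time.Second))
}
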
	I1016 18:30:46.115449  277941 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-523257" ...
	I1016 18:30:46.115528  277941 cli_runner.go:164] Run: docker start default-k8s-diff-port-523257
	I1016 18:30:46.425909  277941 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:30:46.447406  277941 kic.go:430] container "default-k8s-diff-port-523257" state is running.
	I1016 18:30:46.447917  277941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523257
	I1016 18:30:46.476200  277941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/config.json ...
	I1016 18:30:46.476485  277941 machine.go:93] provisionDockerMachine start ...
	I1016 18:30:46.476550  277941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:30:46.502709  277941 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:46.503064  277941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1016 18:30:46.503076  277941 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:30:46.503969  277941 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42648->127.0.0.1:33103: read: connection reset by peer
	I1016 18:30:49.647142  277941 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-523257
	
	I1016 18:30:49.647198  277941 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-523257"
	I1016 18:30:49.647307  277941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:30:49.667161  277941 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:49.667403  277941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1016 18:30:49.667422  277941 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-523257 && echo "default-k8s-diff-port-523257" | sudo tee /etc/hostname
	I1016 18:30:49.817439  277941 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-523257
	
	I1016 18:30:49.817517  277941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:30:49.837493  277941 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:49.837755  277941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1016 18:30:49.837780  277941 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-523257' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-523257/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-523257' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:30:49.979264  277941 main.go:141] libmachine: SSH cmd err, output: <nil>: 
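
The SSH script above is idempotent: it does nothing if a 127.0.1.1 alias for the hostname already exists, rewrites an existing 127.0.1.1 line in place, and only otherwise appends one. A hypothetical Go helper that renders the same script for any profile name (string templating only; the real step executes the result over SSH):

package main

import "fmt"

// hostsCmd renders the idempotent /etc/hosts edit shown in the log
// for a given profile hostname.
func hostsCmd(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, hostname)
}

func main() { fmt.Println(hostsCmd("auto-084411")) }
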
	I1016 18:30:49.979299  277941 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8849/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8849/.minikube}
	I1016 18:30:49.979324  277941 ubuntu.go:190] setting up certificates
	I1016 18:30:49.979338  277941 provision.go:84] configureAuth start
	I1016 18:30:49.979428  277941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523257
	I1016 18:30:49.998749  277941 provision.go:143] copyHostCerts
	I1016 18:30:49.998806  277941 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem, removing ...
	I1016 18:30:49.998825  277941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem
	I1016 18:30:49.998891  277941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem (1078 bytes)
	I1016 18:30:49.999032  277941 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem, removing ...
	I1016 18:30:49.999048  277941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem
	I1016 18:30:49.999086  277941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem (1123 bytes)
	I1016 18:30:49.999176  277941 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem, removing ...
	I1016 18:30:49.999187  277941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem
	I1016 18:30:49.999218  277941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem (1679 bytes)
	I1016 18:30:49.999288  277941 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-523257 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-523257 localhost minikube]
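
The server certificate above carries SANs [127.0.0.1 192.168.85.2 default-k8s-diff-port-523257 localhost minikube] and a 26280h lifetime matching CertExpiration in the cluster config. A reduced sketch of such an issuance with crypto/x509; it self-signs for brevity, whereas the real step signs with the ca-key.pem referenced in the log line:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-523257"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs from the log line, split by type as x509 requires.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"default-k8s-diff-port-523257", "localhost", "minikube"},
	}
	// Self-signed for brevity; the real step signs with the CA key pair.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
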
	I1016 18:30:50.588267  277941 provision.go:177] copyRemoteCerts
	I1016 18:30:50.588332  277941 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:30:50.588377  277941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:30:50.609304  277941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:30:50.712585  277941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 18:30:50.732333  277941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1016 18:30:50.752653  277941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1016 18:30:50.774698  277941 provision.go:87] duration metric: took 795.33544ms to configureAuth
	I1016 18:30:50.774752  277941 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:30:50.774927  277941 config.go:182] Loaded profile config "default-k8s-diff-port-523257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:30:50.775069  277941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:30:50.795951  277941 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:50.796271  277941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1016 18:30:50.796295  277941 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:30:46.120613  276879 cli_runner.go:164] Run: docker container inspect auto-084411 --format={{.State.Status}}
	I1016 18:30:46.141162  276879 cli_runner.go:164] Run: docker exec auto-084411 stat /var/lib/dpkg/alternatives/iptables
	I1016 18:30:46.191398  276879 oci.go:144] the created container "auto-084411" has a running status.
	I1016 18:30:46.191432  276879 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/auto-084411/id_rsa...
	I1016 18:30:46.234941  276879 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21738-8849/.minikube/machines/auto-084411/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1016 18:30:46.268318  276879 cli_runner.go:164] Run: docker container inspect auto-084411 --format={{.State.Status}}
	I1016 18:30:46.297567  276879 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1016 18:30:46.297682  276879 kic_runner.go:114] Args: [docker exec --privileged auto-084411 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1016 18:30:46.358944  276879 cli_runner.go:164] Run: docker container inspect auto-084411 --format={{.State.Status}}
	I1016 18:30:46.387410  276879 machine.go:93] provisionDockerMachine start ...
	I1016 18:30:46.387516  276879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-084411
	I1016 18:30:46.412828  276879 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:46.413177  276879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1016 18:30:46.413208  276879 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:30:46.414015  276879 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57206->127.0.0.1:33098: read: connection reset by peer
	I1016 18:30:49.553044  276879 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-084411
	
	I1016 18:30:49.553072  276879 ubuntu.go:182] provisioning hostname "auto-084411"
	I1016 18:30:49.553139  276879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-084411
	I1016 18:30:49.573298  276879 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:49.573501  276879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1016 18:30:49.573514  276879 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-084411 && echo "auto-084411" | sudo tee /etc/hostname
	I1016 18:30:49.720957  276879 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-084411
	
	I1016 18:30:49.721040  276879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-084411
	I1016 18:30:49.740301  276879 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:49.740514  276879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1016 18:30:49.740530  276879 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-084411' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-084411/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-084411' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:30:49.879871  276879 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:30:49.879915  276879 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8849/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8849/.minikube}
	I1016 18:30:49.879937  276879 ubuntu.go:190] setting up certificates
	I1016 18:30:49.879946  276879 provision.go:84] configureAuth start
	I1016 18:30:49.880033  276879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-084411
	I1016 18:30:49.900303  276879 provision.go:143] copyHostCerts
	I1016 18:30:49.900367  276879 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem, removing ...
	I1016 18:30:49.900380  276879 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem
	I1016 18:30:49.900451  276879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem (1679 bytes)
	I1016 18:30:49.900533  276879 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem, removing ...
	I1016 18:30:49.900541  276879 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem
	I1016 18:30:49.900571  276879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem (1078 bytes)
	I1016 18:30:49.900638  276879 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem, removing ...
	I1016 18:30:49.900650  276879 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem
	I1016 18:30:49.900684  276879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem (1123 bytes)
	I1016 18:30:49.900781  276879 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem org=jenkins.auto-084411 san=[127.0.0.1 192.168.94.2 auto-084411 localhost minikube]
	I1016 18:30:49.966171  276879 provision.go:177] copyRemoteCerts
	I1016 18:30:49.966235  276879 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:30:49.966276  276879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-084411
	I1016 18:30:49.986705  276879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/auto-084411/id_rsa Username:docker}
	I1016 18:30:50.086801  276879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1016 18:30:50.110385  276879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 18:30:50.129996  276879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1016 18:30:50.148756  276879 provision.go:87] duration metric: took 268.795696ms to configureAuth
	I1016 18:30:50.148783  276879 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:30:50.148929  276879 config.go:182] Loaded profile config "auto-084411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:30:50.149019  276879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-084411
	I1016 18:30:50.168917  276879 main.go:141] libmachine: Using SSH client type: native
	I1016 18:30:50.169222  276879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1016 18:30:50.169251  276879 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:30:50.429905  276879 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:30:50.429930  276879 machine.go:96] duration metric: took 4.04248998s to provisionDockerMachine
	I1016 18:30:50.429942  276879 client.go:171] duration metric: took 9.126434117s to LocalClient.Create
	I1016 18:30:50.429967  276879 start.go:167] duration metric: took 9.126499601s to libmachine.API.Create "auto-084411"
	I1016 18:30:50.429990  276879 start.go:293] postStartSetup for "auto-084411" (driver="docker")
	I1016 18:30:50.430008  276879 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:30:50.430073  276879 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:30:50.430107  276879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-084411
	I1016 18:30:50.452307  276879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/auto-084411/id_rsa Username:docker}
	I1016 18:30:50.554029  276879 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:30:50.557849  276879 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:30:50.557874  276879 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:30:50.557884  276879 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/addons for local assets ...
	I1016 18:30:50.557933  276879 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/files for local assets ...
	I1016 18:30:50.558002  276879 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem -> 123752.pem in /etc/ssl/certs
	I1016 18:30:50.558102  276879 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:30:50.566799  276879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:30:50.589568  276879 start.go:296] duration metric: took 159.563931ms for postStartSetup
	I1016 18:30:50.590022  276879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-084411
	I1016 18:30:50.609777  276879 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/config.json ...
	I1016 18:30:50.610131  276879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:30:50.610185  276879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-084411
	I1016 18:30:50.630531  276879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/auto-084411/id_rsa Username:docker}
	I1016 18:30:50.728995  276879 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:30:50.734488  276879 start.go:128] duration metric: took 9.433280138s to createHost
	I1016 18:30:50.734516  276879 start.go:83] releasing machines lock for "auto-084411", held for 9.433420151s
	I1016 18:30:50.734589  276879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-084411
	I1016 18:30:50.754510  276879 ssh_runner.go:195] Run: cat /version.json
	I1016 18:30:50.754540  276879 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:30:50.754574  276879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-084411
	I1016 18:30:50.754583  276879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-084411
	I1016 18:30:50.776144  276879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/auto-084411/id_rsa Username:docker}
	I1016 18:30:50.777193  276879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/auto-084411/id_rsa Username:docker}
	I1016 18:30:50.874628  276879 ssh_runner.go:195] Run: systemctl --version
	I1016 18:30:50.930783  276879 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:30:50.969621  276879 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:30:50.975242  276879 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:30:50.975310  276879 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:30:51.003664  276879 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1016 18:30:51.003689  276879 start.go:495] detecting cgroup driver to use...
	I1016 18:30:51.003744  276879 detect.go:190] detected "systemd" cgroup driver on host os
	I1016 18:30:51.003797  276879 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:30:51.021427  276879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:30:51.035200  276879 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:30:51.035272  276879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:30:51.054019  276879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:30:51.073691  276879 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:30:51.166882  276879 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:30:51.265711  276879 docker.go:234] disabling docker service ...
	I1016 18:30:51.265807  276879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:30:51.289871  276879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:30:51.306170  276879 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:30:51.403474  276879 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:30:51.506497  276879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:30:51.521840  276879 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:30:51.538294  276879 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:30:51.538363  276879 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:51.551404  276879 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1016 18:30:51.551457  276879 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:51.562453  276879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:51.573458  276879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:51.583299  276879 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:30:51.592057  276879 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:51.602126  276879 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:51.618573  276879 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:51.629909  276879 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:30:51.641425  276879 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:30:51.652017  276879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:30:51.748783  276879 ssh_runner.go:195] Run: sudo systemctl restart crio
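(Annotation: the sed/tee sequence above, crio.go:59-70, rewrites CRI-O's drop-in config before this restart. A rough sketch of the end state, assuming the kicbase image's stock section layout in 02-crio.conf — the real file carries more keys than shown:

  # /etc/crictl.yaml -- points crictl at CRI-O's socket
  runtime-endpoint: unix:///var/run/crio/crio.sock

  # /etc/crio/crio.conf.d/02-crio.conf -- keys touched by the edits above
  [crio.image]
  pause_image = "registry.k8s.io/pause:3.10.1"

  [crio.runtime]
  cgroup_manager = "systemd"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]
)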
	I1016 18:30:51.882210  276879 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:30:51.882312  276879 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:30:51.887007  276879 start.go:563] Will wait 60s for crictl version
	I1016 18:30:51.887086  276879 ssh_runner.go:195] Run: which crictl
	I1016 18:30:51.891350  276879 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:30:51.916769  276879 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:30:51.916841  276879 ssh_runner.go:195] Run: crio --version
	I1016 18:30:51.947106  276879 ssh_runner.go:195] Run: crio --version
	I1016 18:30:51.982561  276879 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:30:51.099020  277941 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:30:51.099058  277941 machine.go:96] duration metric: took 4.622557826s to provisionDockerMachine
	I1016 18:30:51.099076  277941 start.go:293] postStartSetup for "default-k8s-diff-port-523257" (driver="docker")
	I1016 18:30:51.099091  277941 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:30:51.099168  277941 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:30:51.099218  277941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:30:51.125245  277941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:30:51.224177  277941 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:30:51.228358  277941 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:30:51.228391  277941 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:30:51.228404  277941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/addons for local assets ...
	I1016 18:30:51.228473  277941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/files for local assets ...
	I1016 18:30:51.228581  277941 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem -> 123752.pem in /etc/ssl/certs
	I1016 18:30:51.228757  277941 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:30:51.239433  277941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:30:51.261437  277941 start.go:296] duration metric: took 162.345111ms for postStartSetup
	I1016 18:30:51.261523  277941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:30:51.261591  277941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:30:51.285115  277941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:30:51.387241  277941 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:30:51.392567  277941 fix.go:56] duration metric: took 5.300642268s for fixHost
	I1016 18:30:51.392597  277941 start.go:83] releasing machines lock for "default-k8s-diff-port-523257", held for 5.300695166s
	I1016 18:30:51.392676  277941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523257
	I1016 18:30:51.415523  277941 ssh_runner.go:195] Run: cat /version.json
	I1016 18:30:51.415581  277941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:30:51.415649  277941 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:30:51.415738  277941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:30:51.441092  277941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:30:51.447418  277941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:30:51.604624  277941 ssh_runner.go:195] Run: systemctl --version
	I1016 18:30:51.612226  277941 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:30:51.655832  277941 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:30:51.661156  277941 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:30:51.661223  277941 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:30:51.671840  277941 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1016 18:30:51.671870  277941 start.go:495] detecting cgroup driver to use...
	I1016 18:30:51.671905  277941 detect.go:190] detected "systemd" cgroup driver on host os
	I1016 18:30:51.671966  277941 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:30:51.693903  277941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:30:51.708294  277941 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:30:51.708358  277941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:30:51.725630  277941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:30:51.739028  277941 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:30:51.838059  277941 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:30:51.926779  277941 docker.go:234] disabling docker service ...
	I1016 18:30:51.926865  277941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:30:51.943057  277941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:30:51.956843  277941 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:30:52.049647  277941 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:30:52.143330  277941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:30:52.156846  277941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:30:52.172263  277941 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:30:52.172334  277941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:52.182077  277941 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1016 18:30:52.182147  277941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:52.191565  277941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:52.200823  277941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:52.210702  277941 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:30:52.219952  277941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:52.229274  277941 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:52.238806  277941 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:30:52.252444  277941 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:30:52.260856  277941 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:30:52.268768  277941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:30:52.356903  277941 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 18:30:52.479918  277941 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:30:52.480040  277941 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:30:52.484339  277941 start.go:563] Will wait 60s for crictl version
	I1016 18:30:52.484405  277941 ssh_runner.go:195] Run: which crictl
	I1016 18:30:52.488093  277941 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:30:52.514132  277941 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:30:52.514200  277941 ssh_runner.go:195] Run: crio --version
	I1016 18:30:52.545003  277941 ssh_runner.go:195] Run: crio --version
	I1016 18:30:52.577531  277941 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:30:52.578868  277941 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-523257 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:30:52.597824  277941 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1016 18:30:52.602327  277941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
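(Annotation: the hosts rewrite above is a replace-or-append idiom: drop any line already tagged host.minikube.internal, append the fresh mapping, then cp the temp file back so the inode of /etc/hosts is preserved — inside a Docker container /etc/hosts is a bind mount, so a rename-based edit such as sed -i would presumably fail. A minimal standalone sketch of the same pattern, using the gateway IP from the log:

  { grep -v $'\thost.minikube.internal$' /etc/hosts; \
    echo $'192.168.85.1\thost.minikube.internal'; } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
)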
	I1016 18:30:52.613667  277941 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-523257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:30:52.613795  277941 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:30:52.613849  277941 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:30:52.652101  277941 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:30:52.652122  277941 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:30:52.652172  277941 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:30:52.680041  277941 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:30:52.680064  277941 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:30:52.680074  277941 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1016 18:30:52.680183  277941 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-523257 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 18:30:52.680261  277941 ssh_runner.go:195] Run: crio config
	I1016 18:30:52.728076  277941 cni.go:84] Creating CNI manager for ""
	I1016 18:30:52.728106  277941 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:30:52.728126  277941 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:30:52.728156  277941 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-523257 NodeName:default-k8s-diff-port-523257 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:30:52.728317  277941 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-523257"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 18:30:52.728394  277941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:30:52.736843  277941 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:30:52.736904  277941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:30:52.744746  277941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1016 18:30:52.756960  277941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:30:52.770347  277941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
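(Annotation: the multi-document kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml.new on the node. A quick way to sanity-check such a config by hand, assuming a recent kubeadm — the validate subcommand has existed since v1.26:

  # validate the generated config without touching the cluster
  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
)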
	I1016 18:30:52.783930  277941 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1016 18:30:52.787691  277941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:30:52.798172  277941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:30:52.878609  277941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:30:52.904691  277941 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257 for IP: 192.168.85.2
	I1016 18:30:52.904729  277941 certs.go:195] generating shared ca certs ...
	I1016 18:30:52.904747  277941 certs.go:227] acquiring lock for ca certs: {Name:mkebf15a3970a66f77cf66e14f6efeaafcab5e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:52.904900  277941 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key
	I1016 18:30:52.904953  277941 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key
	I1016 18:30:52.904964  277941 certs.go:257] generating profile certs ...
	I1016 18:30:52.905068  277941 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/client.key
	I1016 18:30:52.905142  277941 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key.0a5b079c
	I1016 18:30:52.905200  277941 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.key
	I1016 18:30:52.905363  277941 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem (1338 bytes)
	W1016 18:30:52.905401  277941 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375_empty.pem, impossibly tiny 0 bytes
	I1016 18:30:52.905414  277941 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem (1679 bytes)
	I1016 18:30:52.905443  277941 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem (1078 bytes)
	I1016 18:30:52.905472  277941 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:30:52.905501  277941 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem (1679 bytes)
	I1016 18:30:52.905552  277941 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:30:52.906313  277941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:30:52.925824  277941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1016 18:30:52.947126  277941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:30:52.967641  277941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1016 18:30:52.993931  277941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1016 18:30:53.013657  277941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1016 18:30:53.031482  277941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:30:53.049750  277941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/default-k8s-diff-port-523257/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:30:53.067175  277941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:30:53.084537  277941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem --> /usr/share/ca-certificates/12375.pem (1338 bytes)
	I1016 18:30:53.103855  277941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /usr/share/ca-certificates/123752.pem (1708 bytes)
	I1016 18:30:53.121966  277941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:30:53.135633  277941 ssh_runner.go:195] Run: openssl version
	I1016 18:30:53.141899  277941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:30:53.150376  277941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:30:53.154024  277941 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:30:53.154064  277941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:30:53.189386  277941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:30:53.198408  277941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12375.pem && ln -fs /usr/share/ca-certificates/12375.pem /etc/ssl/certs/12375.pem"
	I1016 18:30:53.208123  277941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12375.pem
	I1016 18:30:53.212874  277941 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:49 /usr/share/ca-certificates/12375.pem
	I1016 18:30:53.212938  277941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12375.pem
	I1016 18:30:53.252568  277941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12375.pem /etc/ssl/certs/51391683.0"
	I1016 18:30:53.261265  277941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123752.pem && ln -fs /usr/share/ca-certificates/123752.pem /etc/ssl/certs/123752.pem"
	I1016 18:30:53.270160  277941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123752.pem
	I1016 18:30:53.274174  277941 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:49 /usr/share/ca-certificates/123752.pem
	I1016 18:30:53.274223  277941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123752.pem
	I1016 18:30:53.309441  277941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123752.pem /etc/ssl/certs/3ec20f2e.0"
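(Annotation: the three ln -fs commands above build OpenSSL-style subject-hash lookup links, the same thing c_rehash automates: openssl x509 -hash prints the hash OpenSSL uses to find a CA in /etc/ssl/certs, and the .0 suffix is a collision counter. Reproducing the first link by hand:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  # prints b5213941, matching the b5213941.0 symlink created above
  ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
)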
	I1016 18:30:53.317886  277941 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:30:53.322083  277941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1016 18:30:53.356052  277941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1016 18:30:53.390524  277941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1016 18:30:53.435285  277941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1016 18:30:53.488865  277941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1016 18:30:53.541428  277941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1016 18:30:53.600940  277941 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-523257 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-523257 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:30:53.601038  277941 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:30:53.601105  277941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:30:53.635865  277941 cri.go:89] found id: "04779c28f1cb8c52ec504e348fc93fc81c1b41fa21e6a652062eeab076efcbb7"
	I1016 18:30:53.635887  277941 cri.go:89] found id: "0b66af6e1e6d7fd2735eb36e2ebf313e19ff23b7b1b8b97956469bf3c79a9f5f"
	I1016 18:30:53.635893  277941 cri.go:89] found id: "b18e9cf1502f711153aae166f07b5f02021e0507c8f195aece2617ed442e892a"
	I1016 18:30:53.635897  277941 cri.go:89] found id: "9b2c049fb89ee7ff479ec6255ed7c0c81b6c9f0faf4d8e9c462dcc7f723f7e05"
	I1016 18:30:53.635902  277941 cri.go:89] found id: ""
	I1016 18:30:53.635943  277941 ssh_runner.go:195] Run: sudo runc list -f json
	W1016 18:30:53.652793  277941 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:30:53Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:30:53.652860  277941 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:30:53.662762  277941 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1016 18:30:53.662782  277941 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1016 18:30:53.662827  277941 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1016 18:30:53.671115  277941 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:30:53.672147  277941 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-523257" does not appear in /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:30:53.672933  277941 kubeconfig.go:62] /home/jenkins/minikube-integration/21738-8849/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-523257" cluster setting kubeconfig missing "default-k8s-diff-port-523257" context setting]
	I1016 18:30:53.674100  277941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:53.676332  277941 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1016 18:30:53.684951  277941 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1016 18:30:53.685145  277941 kubeadm.go:601] duration metric: took 22.209334ms to restartPrimaryControlPlane
	I1016 18:30:53.685160  277941 kubeadm.go:402] duration metric: took 84.229623ms to StartCluster
	I1016 18:30:53.685230  277941 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:53.685384  277941 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:30:53.687362  277941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:53.687656  277941 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:30:53.687741  277941 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:30:53.687852  277941 config.go:182] Loaded profile config "default-k8s-diff-port-523257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:30:53.687863  277941 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-523257"
	I1016 18:30:53.687881  277941 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-523257"
	I1016 18:30:53.687882  277941 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-523257"
	W1016 18:30:53.687893  277941 addons.go:247] addon dashboard should already be in state true
	I1016 18:30:53.687852  277941 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-523257"
	I1016 18:30:53.687903  277941 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-523257"
	I1016 18:30:53.687914  277941 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-523257"
	W1016 18:30:53.687924  277941 addons.go:247] addon storage-provisioner should already be in state true
	I1016 18:30:53.687927  277941 host.go:66] Checking if "default-k8s-diff-port-523257" exists ...
	I1016 18:30:53.687945  277941 host.go:66] Checking if "default-k8s-diff-port-523257" exists ...
	I1016 18:30:53.688244  277941 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:30:53.688356  277941 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:30:53.688419  277941 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:30:53.691004  277941 out.go:179] * Verifying Kubernetes components...
	I1016 18:30:53.692580  277941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:30:53.717206  277941 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-523257"
	W1016 18:30:53.717292  277941 addons.go:247] addon default-storageclass should already be in state true
	I1016 18:30:53.717328  277941 host.go:66] Checking if "default-k8s-diff-port-523257" exists ...
	I1016 18:30:53.717829  277941 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:30:53.718507  277941 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:30:53.719864  277941 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1016 18:30:53.719932  277941 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:30:53.719948  277941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:30:53.720021  277941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:30:53.722975  277941 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1016 18:30:51.173801  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:30:51.174196  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:30:51.174253  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1016 18:30:51.174314  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1016 18:30:51.208953  228782 cri.go:89] found id: "0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:51.208973  228782 cri.go:89] found id: ""
	I1016 18:30:51.208981  228782 logs.go:282] 1 containers: [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98]
	I1016 18:30:51.209029  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:51.213380  228782 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1016 18:30:51.213450  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1016 18:30:51.244211  228782 cri.go:89] found id: ""
	I1016 18:30:51.244242  228782 logs.go:282] 0 containers: []
	W1016 18:30:51.244253  228782 logs.go:284] No container was found matching "etcd"
	I1016 18:30:51.244261  228782 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1016 18:30:51.244332  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1016 18:30:51.274900  228782 cri.go:89] found id: ""
	I1016 18:30:51.274929  228782 logs.go:282] 0 containers: []
	W1016 18:30:51.274941  228782 logs.go:284] No container was found matching "coredns"
	I1016 18:30:51.274949  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1016 18:30:51.275014  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1016 18:30:51.307064  228782 cri.go:89] found id: "873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:51.307086  228782 cri.go:89] found id: ""
	I1016 18:30:51.307095  228782 logs.go:282] 1 containers: [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1]
	I1016 18:30:51.307156  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:51.311310  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1016 18:30:51.311384  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1016 18:30:51.347509  228782 cri.go:89] found id: ""
	I1016 18:30:51.347539  228782 logs.go:282] 0 containers: []
	W1016 18:30:51.347551  228782 logs.go:284] No container was found matching "kube-proxy"
	I1016 18:30:51.347558  228782 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1016 18:30:51.347625  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1016 18:30:51.376948  228782 cri.go:89] found id: "5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:51.376981  228782 cri.go:89] found id: ""
	I1016 18:30:51.376990  228782 logs.go:282] 1 containers: [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a]
	I1016 18:30:51.377038  228782 ssh_runner.go:195] Run: which crictl
	I1016 18:30:51.381765  228782 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1016 18:30:51.381834  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1016 18:30:51.413380  228782 cri.go:89] found id: ""
	I1016 18:30:51.413409  228782 logs.go:282] 0 containers: []
	W1016 18:30:51.413419  228782 logs.go:284] No container was found matching "kindnet"
	I1016 18:30:51.413426  228782 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1016 18:30:51.413485  228782 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1016 18:30:51.455101  228782 cri.go:89] found id: ""
	I1016 18:30:51.455129  228782 logs.go:282] 0 containers: []
	W1016 18:30:51.455139  228782 logs.go:284] No container was found matching "storage-provisioner"
	I1016 18:30:51.455149  228782 logs.go:123] Gathering logs for container status ...
	I1016 18:30:51.455166  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1016 18:30:51.491676  228782 logs.go:123] Gathering logs for kubelet ...
	I1016 18:30:51.491710  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1016 18:30:51.592066  228782 logs.go:123] Gathering logs for dmesg ...
	I1016 18:30:51.592088  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1016 18:30:51.608198  228782 logs.go:123] Gathering logs for describe nodes ...
	I1016 18:30:51.608226  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1016 18:30:51.676856  228782 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1016 18:30:51.676884  228782 logs.go:123] Gathering logs for kube-apiserver [0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98] ...
	I1016 18:30:51.676899  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 0dd262ad29367cfc2411ba4e8993974689b427bc592afc7b162219fa53f85a98"
	I1016 18:30:51.720481  228782 logs.go:123] Gathering logs for kube-scheduler [873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1] ...
	I1016 18:30:51.720511  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 873d40a87af0c3f8f3266b4621c056e591546eaa4e95e47b1bf43039b963ebe1"
	I1016 18:30:51.788194  228782 logs.go:123] Gathering logs for kube-controller-manager [5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a] ...
	I1016 18:30:51.788237  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5991e1b8e4c215f2ab170c898369cfb4a216560a0b28010f6e7e2af8d6d1bf1a"
	I1016 18:30:51.819507  228782 logs.go:123] Gathering logs for CRI-O ...
	I1016 18:30:51.819538  228782 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
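
The sequence above is minikube's standard diagnostic dump once the apiserver refuses connections: for each control-plane component it asks the CRI runtime for matching containers (crictl ps -a --quiet --name=<component>), then tails the last 400 log lines of whatever it finds, falling back to journalctl for kubelet and CRI-O themselves. A minimal Go sketch of that loop — the containerIDs helper is illustrative, not minikube's actual API, and it assumes crictl and passwordless sudo are available on the node:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists all CRI container IDs whose name matches component,
	// mirroring the "crictl ps -a --quiet --name=..." calls in the log.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, comp := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := containerIDs(comp)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", comp)
				continue
			}
			for _, id := range ids {
				// Tail the last 400 lines, as the "crictl logs --tail 400 <id>" calls above do.
				logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s", comp, id, logs)
			}
		}
	}
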
	I1016 18:30:54.390255  228782 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:30:54.390697  228782 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1016 18:30:54.390783  228782 kubeadm.go:601] duration metric: took 4m4.963407806s to restartPrimaryControlPlane
	W1016 18:30:54.390844  228782 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1016 18:30:54.390906  228782 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1016 18:30:54.965781  228782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:30:54.982060  228782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 18:30:54.992398  228782 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1016 18:30:54.992461  228782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 18:30:55.000831  228782 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1016 18:30:55.000856  228782 kubeadm.go:157] found existing configuration files:
	
	I1016 18:30:55.000900  228782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1016 18:30:55.010360  228782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1016 18:30:55.010422  228782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 18:30:55.019313  228782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1016 18:30:55.028039  228782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1016 18:30:55.028106  228782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 18:30:55.036876  228782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1016 18:30:55.047190  228782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1016 18:30:55.047254  228782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1016 18:30:55.058861  228782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1016 18:30:55.070043  228782 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1016 18:30:55.070106  228782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1016 18:30:55.078209  228782 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1016 18:30:55.114473  228782 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1016 18:30:55.114545  228782 kubeadm.go:318] [preflight] Running pre-flight checks
	I1016 18:30:55.137956  228782 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1016 18:30:55.138082  228782 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1016 18:30:55.138161  228782 kubeadm.go:318] OS: Linux
	I1016 18:30:55.138241  228782 kubeadm.go:318] CGROUPS_CPU: enabled
	I1016 18:30:55.138312  228782 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1016 18:30:55.138383  228782 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1016 18:30:55.138474  228782 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1016 18:30:55.138546  228782 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1016 18:30:55.138609  228782 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1016 18:30:55.138692  228782 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1016 18:30:55.138779  228782 kubeadm.go:318] CGROUPS_IO: enabled
	I1016 18:30:55.212312  228782 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1016 18:30:55.212493  228782 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1016 18:30:55.212629  228782 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1016 18:30:55.220395  228782 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1016 18:30:53.724396  277941 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1016 18:30:53.724416  277941 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1016 18:30:53.724479  277941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:30:53.751316  277941 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:30:53.751340  277941 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:30:53.751476  277941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:30:53.752385  277941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:30:53.754449  277941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:30:53.779808  277941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:30:53.849365  277941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:30:53.867036  277941 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-523257" to be "Ready" ...
	I1016 18:30:53.876696  277941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:30:53.884560  277941 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1016 18:30:53.884588  277941 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1016 18:30:53.897188  277941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:30:53.901846  277941 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1016 18:30:53.901953  277941 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1016 18:30:53.918939  277941 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1016 18:30:53.919049  277941 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1016 18:30:53.939408  277941 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1016 18:30:53.939429  277941 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1016 18:30:53.959327  277941 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1016 18:30:53.959397  277941 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1016 18:30:53.978664  277941 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1016 18:30:53.978682  277941 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1016 18:30:53.996244  277941 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1016 18:30:53.996284  277941 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1016 18:30:54.014836  277941 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1016 18:30:54.014860  277941 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1016 18:30:54.031190  277941 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1016 18:30:54.031213  277941 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1016 18:30:54.048036  277941 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1016 18:30:55.539235  277941 node_ready.go:49] node "default-k8s-diff-port-523257" is "Ready"
	I1016 18:30:55.539274  277941 node_ready.go:38] duration metric: took 1.672203322s for node "default-k8s-diff-port-523257" to be "Ready" ...
	I1016 18:30:55.539291  277941 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:30:55.539350  277941 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
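
Each addon in this log is staged the same way: the manifest bytes are scp'd into /etc/kubernetes/addons on the node, then applied in a single kubectl invocation with one -f flag per file and KUBECONFIG pointing at the cluster's admin kubeconfig. A rough sketch of that apply step (file list abbreviated; paths taken from the log above; on the node itself minikube runs this under sudo):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		files := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		}
		// Build "kubectl apply -f a.yaml -f b.yaml ..." exactly as the log shows.
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		cmd := exec.Command("kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		_ = cmd.Run()
	}
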
	I1016 18:30:51.984331  276879 cli_runner.go:164] Run: docker network inspect auto-084411 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:30:52.006754  276879 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1016 18:30:52.011142  276879 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
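
The /etc/hosts update above is an idempotent one-liner: it filters out any existing host.minikube.internal entry, appends the fresh mapping, writes the result to a PID-keyed temp file, and copies it back with sudo cp. Using cp rather than mv matters here, most likely because /etc/hosts is bind-mounted inside the Docker-driver container, so the file can only be rewritten in place, not renamed over. A sketch that builds the same shell command (IP and hostname taken from the log; error handling elided):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		ip, host := "192.168.94.1", "host.minikube.internal"
		entry := ip + "\t" + host // literal tab, matching the /etc/hosts format
		// Drop any stale entry, append the fresh one, then overwrite with cp:
		// /etc/hosts is bind-mounted in the container, so it is rewritten in
		// place rather than renamed over.
		cmd := "{ grep -v $'\\t" + host + "$' /etc/hosts; echo \"" + entry + "\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Println(string(out), err)
	}
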
	I1016 18:30:52.022148  276879 kubeadm.go:883] updating cluster {Name:auto-084411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-084411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:30:52.022287  276879 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:30:52.022330  276879 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:30:52.055910  276879 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:30:52.055931  276879 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:30:52.055974  276879 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:30:52.090546  276879 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:30:52.090574  276879 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:30:52.090584  276879 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1016 18:30:52.090695  276879 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-084411 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-084411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1016 18:30:52.090791  276879 ssh_runner.go:195] Run: crio config
	I1016 18:30:52.139239  276879 cni.go:84] Creating CNI manager for ""
	I1016 18:30:52.139277  276879 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1016 18:30:52.139298  276879 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:30:52.139323  276879 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-084411 NodeName:auto-084411 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:30:52.139468  276879 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-084411"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 18:30:52.139539  276879 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:30:52.148614  276879 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:30:52.148680  276879 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:30:52.157101  276879 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1016 18:30:52.171284  276879 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:30:52.187864  276879 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
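
The 2207-byte kubeadm.yaml.new written here is the four-document config printed at 18:30:52.139468 above (InitConfiguration and ClusterConfiguration, then a KubeletConfiguration and a KubeProxyConfiguration joined by ---); once moved into place it is handed to kubeadm init --config with the long --ignore-preflight-errors list seen elsewhere in this log. A stripped-down sketch of that invocation, with the ignore list abbreviated to the checks most relevant to the docker driver:

	package main

	import (
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Illustrative subset of the preflight checks minikube skips for the
		// docker driver (see the full --ignore-preflight-errors list above).
		ignored := []string{"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification"}
		cmd := exec.Command("sudo", "kubeadm", "init",
			"--config", "/var/tmp/minikube/kubeadm.yaml",
			"--ignore-preflight-errors="+strings.Join(ignored, ","))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		_ = cmd.Run()
	}
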
	I1016 18:30:52.201096  276879 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1016 18:30:52.205247  276879 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:30:52.215883  276879 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:30:52.305634  276879 ssh_runner.go:195] Run: sudo systemctl start kubelet
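
The "scp memory --> <path> (N bytes)" lines above are worth decoding: the payload (here the rendered kubelet.service unit and 10-kubeadm.conf drop-in) is generated in memory and streamed over the SSH connection, rather than copied from a file on the CI host. Something equivalent can be done with plain ssh and tee; the port and user below are taken from the sshutil lines earlier in this log for a different profile's client, and stand in purely as an example (a sketch, not minikube's ssh_runner):

	package main

	import (
		"bytes"
		"os/exec"
	)

	func main() {
		unit := []byte("[Unit]\nWants=crio.service\n") // payload built in memory
		// Stream the buffer over SSH and write it in place with sudo tee,
		// the moral equivalent of the "scp memory --> /path" steps above.
		cmd := exec.Command("ssh", "-p", "33103", "docker@127.0.0.1",
			"sudo tee /lib/systemd/system/kubelet.service >/dev/null")
		cmd.Stdin = bytes.NewReader(unit)
		_ = cmd.Run()
	}
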
	I1016 18:30:52.340694  276879 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411 for IP: 192.168.94.2
	I1016 18:30:52.340750  276879 certs.go:195] generating shared ca certs ...
	I1016 18:30:52.340771  276879 certs.go:227] acquiring lock for ca certs: {Name:mkebf15a3970a66f77cf66e14f6efeaafcab5e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:52.340929  276879 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key
	I1016 18:30:52.340969  276879 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key
	I1016 18:30:52.340988  276879 certs.go:257] generating profile certs ...
	I1016 18:30:52.341046  276879 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/client.key
	I1016 18:30:52.341066  276879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/client.crt with IP's: []
	I1016 18:30:52.582998  276879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/client.crt ...
	I1016 18:30:52.583026  276879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/client.crt: {Name:mkab2e78be3b59984a0ba578f4dc95de207a69f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:52.583186  276879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/client.key ...
	I1016 18:30:52.583197  276879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/client.key: {Name:mkeecb19adda3e6ccbea3f5241772d5dc766be33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:52.583282  276879 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/apiserver.key.37baba6b
	I1016 18:30:52.583297  276879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/apiserver.crt.37baba6b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1016 18:30:53.184579  276879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/apiserver.crt.37baba6b ...
	I1016 18:30:53.184608  276879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/apiserver.crt.37baba6b: {Name:mk8d71bd1c293c7c586e0b5e16246014793fe2e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:53.184812  276879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/apiserver.key.37baba6b ...
	I1016 18:30:53.184833  276879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/apiserver.key.37baba6b: {Name:mk767a26233cd4eb40b7e9db71fbe1d472a61556 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:53.184944  276879 certs.go:382] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/apiserver.crt.37baba6b -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/apiserver.crt
	I1016 18:30:53.185051  276879 certs.go:386] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/apiserver.key.37baba6b -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/apiserver.key
	I1016 18:30:53.185132  276879 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/proxy-client.key
	I1016 18:30:53.185157  276879 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/proxy-client.crt with IP's: []
	I1016 18:30:53.560178  276879 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/proxy-client.crt ...
	I1016 18:30:53.560213  276879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/proxy-client.crt: {Name:mk7c557669b441664449d09c37e81969b32fe359 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:53.560417  276879 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/proxy-client.key ...
	I1016 18:30:53.560430  276879 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/proxy-client.key: {Name:mk753601de48fbdac05de6cd8e369227329c385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:30:53.560680  276879 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem (1338 bytes)
	W1016 18:30:53.560791  276879 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375_empty.pem, impossibly tiny 0 bytes
	I1016 18:30:53.560809  276879 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem (1679 bytes)
	I1016 18:30:53.560845  276879 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem (1078 bytes)
	I1016 18:30:53.560879  276879 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:30:53.560915  276879 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem (1679 bytes)
	I1016 18:30:53.560975  276879 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:30:53.561559  276879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:30:53.586552  276879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1016 18:30:53.605352  276879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:30:53.627600  276879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1016 18:30:53.648648  276879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1016 18:30:53.669898  276879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 18:30:53.692865  276879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:30:53.721303  276879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/auto-084411/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:30:53.754904  276879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem --> /usr/share/ca-certificates/12375.pem (1338 bytes)
	I1016 18:30:53.786211  276879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /usr/share/ca-certificates/123752.pem (1708 bytes)
	I1016 18:30:53.812371  276879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:30:53.834198  276879 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
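
The profile certs generated above are ordinary x509 leaf certificates signed by the shared minikubeCA, with the apiserver cert carrying the service IP, loopback, and node IP as SANs ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2] at 18:30:52.583297). A compact, self-contained sketch of that signing step with crypto/x509; a throwaway in-memory CA stands in for the real key pair on disk:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA standing in for minikubeCA.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Apiserver-style leaf cert with the IP SANs listed in the log above.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
			},
		}
		der, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
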
	I1016 18:30:53.849973  276879 ssh_runner.go:195] Run: openssl version
	I1016 18:30:53.856934  276879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12375.pem && ln -fs /usr/share/ca-certificates/12375.pem /etc/ssl/certs/12375.pem"
	I1016 18:30:53.868253  276879 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12375.pem
	I1016 18:30:53.873059  276879 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:49 /usr/share/ca-certificates/12375.pem
	I1016 18:30:53.873125  276879 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12375.pem
	I1016 18:30:53.932359  276879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12375.pem /etc/ssl/certs/51391683.0"
	I1016 18:30:53.944704  276879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123752.pem && ln -fs /usr/share/ca-certificates/123752.pem /etc/ssl/certs/123752.pem"
	I1016 18:30:53.956347  276879 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123752.pem
	I1016 18:30:53.962090  276879 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:49 /usr/share/ca-certificates/123752.pem
	I1016 18:30:53.962149  276879 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123752.pem
	I1016 18:30:54.010264  276879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123752.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:30:54.021701  276879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:30:54.030762  276879 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:30:54.035608  276879 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:30:54.035665  276879 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:30:54.079238  276879 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
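
The openssl x509 -hash -noout runs above compute OpenSSL's subject-name hash for each certificate, and the ln -fs ... /etc/ssl/certs/<hash>.0 links publish them under the c_rehash naming convention (subject hash plus a collision counter) that OpenSSL uses to look up trust anchors at verification time; b5213941 above is minikubeCA's hash. The same two steps, sketched:

	package main

	import (
		"os/exec"
		"strings"
	)

	// installCert links certPath into /etc/ssl/certs under its OpenSSL
	// subject-hash name (<hash>.0), the c_rehash convention used above.
	func installCert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		return exec.Command("sudo", "ln", "-fs", certPath, "/etc/ssl/certs/"+hash+".0").Run()
	}

	func main() {
		_ = installCert("/usr/share/ca-certificates/minikubeCA.pem")
	}
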
	I1016 18:30:54.089454  276879 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:30:54.093766  276879 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1016 18:30:54.093835  276879 kubeadm.go:400] StartCluster: {Name:auto-084411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-084411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:30:54.093925  276879 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:30:54.093982  276879 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:30:54.127819  276879 cri.go:89] found id: ""
	I1016 18:30:54.127881  276879 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:30:54.139272  276879 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 18:30:54.148209  276879 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1016 18:30:54.148276  276879 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 18:30:54.156520  276879 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1016 18:30:54.156537  276879 kubeadm.go:157] found existing configuration files:
	
	I1016 18:30:54.156578  276879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1016 18:30:54.165728  276879 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1016 18:30:54.165812  276879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 18:30:54.174427  276879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1016 18:30:54.182976  276879 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1016 18:30:54.183038  276879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 18:30:54.191587  276879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1016 18:30:54.199644  276879 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1016 18:30:54.199699  276879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1016 18:30:54.208542  276879 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1016 18:30:54.216721  276879 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1016 18:30:54.216782  276879 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1016 18:30:54.224533  276879 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1016 18:30:54.291379  276879 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1016 18:30:54.353758  276879 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
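
Both preflight warnings are expected under the docker driver: the kic container has no matching /lib/modules for the host kernel, so kubeadm's SystemVerification check cannot load the configs module to read the kernel config, and minikube manages kubelet itself rather than enabling the systemd unit. That is why SystemVerification sits in the --ignore-preflight-errors list above. A quick probe of what the verifier is (roughly) looking for — the path list is an assumption based on the usual kernel-config locations, not a quote of kubeadm's source:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, _ := exec.Command("uname", "-r").Output()
		release := strings.TrimSpace(string(out))
		// The validator looks for the kernel config in places like
		// /proc/config.gz or /boot/config-<release>; inside the kic
		// container neither is guaranteed, hence the warning above.
		for _, p := range []string{"/proc/config.gz", "/boot/config-" + release} {
			if _, err := os.Stat(p); err == nil {
				fmt.Println("kernel config found at", p)
				return
			}
		}
		fmt.Println("no kernel config visible; the fallback is 'modprobe configs', which fails here")
	}
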
	I1016 18:30:56.110892  277941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.234160749s)
	I1016 18:30:56.110932  277941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.213719509s)
	I1016 18:30:56.111075  277941 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.063001345s)
	I1016 18:30:56.111123  277941 api_server.go:72] duration metric: took 2.423428472s to wait for apiserver process to appear ...
	I1016 18:30:56.111148  277941 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:30:56.111169  277941 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1016 18:30:56.112643  277941 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-523257 addons enable metrics-server
	
	I1016 18:30:56.115582  277941 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:30:56.115606  277941 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:30:56.118151  277941 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1016 18:30:55.224473  228782 out.go:252]   - Generating certificates and keys ...
	I1016 18:30:55.224591  228782 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 18:30:55.224675  228782 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 18:30:55.224775  228782 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1016 18:30:55.224833  228782 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1016 18:30:55.224901  228782 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1016 18:30:55.224993  228782 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1016 18:30:55.225088  228782 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1016 18:30:55.225179  228782 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1016 18:30:55.225274  228782 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1016 18:30:55.225379  228782 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1016 18:30:55.225443  228782 kubeadm.go:318] [certs] Using the existing "sa" key
	I1016 18:30:55.225519  228782 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 18:30:55.655586  228782 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 18:30:56.187454  228782 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 18:30:56.231319  228782 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 18:30:56.328007  228782 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 18:30:56.448576  228782 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 18:30:56.449145  228782 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 18:30:56.453292  228782 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 18:30:56.455270  228782 out.go:252]   - Booting up control plane ...
	I1016 18:30:56.455357  228782 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 18:30:56.455423  228782 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 18:30:56.455476  228782 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 18:30:56.469929  228782 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 18:30:56.470040  228782 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 18:30:56.478577  228782 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 18:30:56.478968  228782 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 18:30:56.479043  228782 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 18:30:56.599700  228782 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 18:30:56.599878  228782 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 18:30:57.101051  228782 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.398101ms
	I1016 18:30:57.105342  228782 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 18:30:57.105482  228782 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1016 18:30:57.105619  228782 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 18:30:57.105741  228782 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 18:30:56.119356  277941 addons.go:514] duration metric: took 2.431642352s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1016 18:30:56.611882  277941 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1016 18:30:56.617379  277941 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1016 18:30:56.617417  277941 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1016 18:30:57.112167  277941 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1016 18:30:57.116166  277941 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1016 18:30:57.117200  277941 api_server.go:141] control plane version: v1.34.1
	I1016 18:30:57.117224  277941 api_server.go:131] duration metric: took 1.006069235s to wait for apiserver health ...
	I1016 18:30:57.117234  277941 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:30:57.120495  277941 system_pods.go:59] 8 kube-system pods found
	I1016 18:30:57.120541  277941 system_pods.go:61] "coredns-66bc5c9577-jx8q2" [038605d0-574f-4f02-8695-cc80a08e2e43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:30:57.120556  277941 system_pods.go:61] "etcd-default-k8s-diff-port-523257" [04c84db8-f1b8-4d12-b0f1-7f4e413239c8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 18:30:57.120564  277941 system_pods.go:61] "kindnet-bctzw" [a71883f8-793b-41d1-bbad-1c47e65b7768] Running
	I1016 18:30:57.120573  277941 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-523257" [b0589c58-6c52-496e-ba41-477f8db6653f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:30:57.120587  277941 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-523257" [8fe116b0-2f61-4e27-9520-21d74c753947] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:30:57.120596  277941 system_pods.go:61] "kube-proxy-hrdcg" [2ddde19a-7b12-4815-8e04-38066f73935e] Running
	I1016 18:30:57.120608  277941 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-523257" [50baf435-33db-45cc-8c89-c599df7d5e37] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:30:57.120616  277941 system_pods.go:61] "storage-provisioner" [5fa5cdd4-25fd-4a41-9e29-ae166842b3ca] Running
	I1016 18:30:57.120624  277941 system_pods.go:74] duration metric: took 3.382834ms to wait for pod list to return data ...
	I1016 18:30:57.120635  277941 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:30:57.123360  277941 default_sa.go:45] found service account: "default"
	I1016 18:30:57.123383  277941 default_sa.go:55] duration metric: took 2.741338ms for default service account to be created ...
	I1016 18:30:57.123392  277941 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 18:30:57.126239  277941 system_pods.go:86] 8 kube-system pods found
	I1016 18:30:57.126275  277941 system_pods.go:89] "coredns-66bc5c9577-jx8q2" [038605d0-574f-4f02-8695-cc80a08e2e43] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:30:57.126288  277941 system_pods.go:89] "etcd-default-k8s-diff-port-523257" [04c84db8-f1b8-4d12-b0f1-7f4e413239c8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1016 18:30:57.126295  277941 system_pods.go:89] "kindnet-bctzw" [a71883f8-793b-41d1-bbad-1c47e65b7768] Running
	I1016 18:30:57.126305  277941 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-523257" [b0589c58-6c52-496e-ba41-477f8db6653f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:30:57.126320  277941 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-523257" [8fe116b0-2f61-4e27-9520-21d74c753947] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1016 18:30:57.126325  277941 system_pods.go:89] "kube-proxy-hrdcg" [2ddde19a-7b12-4815-8e04-38066f73935e] Running
	I1016 18:30:57.126334  277941 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-523257" [50baf435-33db-45cc-8c89-c599df7d5e37] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1016 18:30:57.126341  277941 system_pods.go:89] "storage-provisioner" [5fa5cdd4-25fd-4a41-9e29-ae166842b3ca] Running
	I1016 18:30:57.126358  277941 system_pods.go:126] duration metric: took 2.959838ms to wait for k8s-apps to be running ...
	I1016 18:30:57.126371  277941 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 18:30:57.126423  277941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:30:57.141455  277941 system_svc.go:56] duration metric: took 15.077597ms WaitForService to wait for kubelet
	I1016 18:30:57.141483  277941 kubeadm.go:586] duration metric: took 3.453789686s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:30:57.141503  277941 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:30:57.144791  277941 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1016 18:30:57.144827  277941 node_conditions.go:123] node cpu capacity is 8
	I1016 18:30:57.144844  277941 node_conditions.go:105] duration metric: took 3.336341ms to run NodePressure ...
	I1016 18:30:57.144859  277941 start.go:241] waiting for startup goroutines ...
	I1016 18:30:57.144869  277941 start.go:246] waiting for cluster config update ...
	I1016 18:30:57.144882  277941 start.go:255] writing updated cluster config ...
	I1016 18:30:57.145190  277941 ssh_runner.go:195] Run: rm -f paused
	I1016 18:30:57.149369  277941 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:30:57.152614  277941 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jx8q2" in "kube-system" namespace to be "Ready" or be gone ...
	W1016 18:30:59.159050  277941 pod_ready.go:104] pod "coredns-66bc5c9577-jx8q2" is not "Ready", error: <nil>
	I1016 18:30:59.118055  228782 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.012746994s
	I1016 18:30:59.825609  228782 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.720210195s
	I1016 18:31:01.106886  228782 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001512558s
	I1016 18:31:01.121829  228782 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 18:31:01.138856  228782 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 18:31:01.156366  228782 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 18:31:01.156672  228782 kubeadm.go:318] [mark-control-plane] Marking the node kubernetes-upgrade-750025 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 18:31:01.169095  228782 kubeadm.go:318] [bootstrap-token] Using token: 68ox7o.g0497jgupjn67ig6
	
	
	==> CRI-O <==
	Oct 16 18:30:22 embed-certs-063117 crio[566]: time="2025-10-16T18:30:22.472977355Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 18:30:22 embed-certs-063117 crio[566]: time="2025-10-16T18:30:22.476678634Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 18:30:22 embed-certs-063117 crio[566]: time="2025-10-16T18:30:22.476706027Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.616244524Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1513d78e-43e1-45ba-9f1e-69ee4aa0c059 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.617342321Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c3b24672-32da-40ff-9a48-96494186b3b2 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.618877811Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx/dashboard-metrics-scraper" id=8a840319-cb3b-44e6-86bf-060c1fb4883c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.61920062Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.626795769Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.627485857Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.661105378Z" level=info msg="Created container 6343af61af5267179390838bbaf09507511c460d0f16d6487353a3356ee5cb20: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx/dashboard-metrics-scraper" id=8a840319-cb3b-44e6-86bf-060c1fb4883c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.661859575Z" level=info msg="Starting container: 6343af61af5267179390838bbaf09507511c460d0f16d6487353a3356ee5cb20" id=2fac6ece-673c-43f6-aca1-105b8e056e09 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.663826132Z" level=info msg="Started container" PID=1763 containerID=6343af61af5267179390838bbaf09507511c460d0f16d6487353a3356ee5cb20 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx/dashboard-metrics-scraper id=2fac6ece-673c-43f6-aca1-105b8e056e09 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5885c8efe72c7b158de7f4cd9442ae699693169165af187e6c1229761beedd3b
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.741929051Z" level=info msg="Removing container: c4214726f97c45308f47debdc334ebe11c99a5cb6cae7fcd300adf1d46d73d64" id=98313a10-19e2-45f2-85de-e4f7481d8e73 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.758202568Z" level=info msg="Removed container c4214726f97c45308f47debdc334ebe11c99a5cb6cae7fcd300adf1d46d73d64: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx/dashboard-metrics-scraper" id=98313a10-19e2-45f2-85de-e4f7481d8e73 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.744032118Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b45ae9a0-a84d-4c40-9b66-96f42013013c name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.744926367Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bed6b915-cec8-4cb1-ae87-d6be6e374fdd name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.746563199Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=76f5b17e-e43b-4857-b74d-f5e1271f4cbf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.746904608Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.751865Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.752069643Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9b3eaf128f09dec10d644242622fa64dbd3523f91ea6c98caae10cbb57fbe56d/merged/etc/passwd: no such file or directory"
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.752096347Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9b3eaf128f09dec10d644242622fa64dbd3523f91ea6c98caae10cbb57fbe56d/merged/etc/group: no such file or directory"
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.752314384Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.790091563Z" level=info msg="Created container f128332ce9a15a83b597d85035bf0d9574b536f9f0ba19197e4afaa75110ed61: kube-system/storage-provisioner/storage-provisioner" id=76f5b17e-e43b-4857-b74d-f5e1271f4cbf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.790779778Z" level=info msg="Starting container: f128332ce9a15a83b597d85035bf0d9574b536f9f0ba19197e4afaa75110ed61" id=da1dd319-bb5b-495a-ae37-29e3b2f2ecca name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.793043029Z" level=info msg="Started container" PID=1777 containerID=f128332ce9a15a83b597d85035bf0d9574b536f9f0ba19197e4afaa75110ed61 description=kube-system/storage-provisioner/storage-provisioner id=da1dd319-bb5b-495a-ae37-29e3b2f2ecca name=/runtime.v1.RuntimeService/StartContainer sandboxID=49e6d6c5bcfd491e069e47cf75af4c20d955114e63cfdd67649ee2422fd773a8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f128332ce9a15       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   49e6d6c5bcfd4       storage-provisioner                          kube-system
	6343af61af526       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   5885c8efe72c7       dashboard-metrics-scraper-6ffb444bf9-g2nfx   kubernetes-dashboard
	44cbf419b1a42       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   120ed500c6092       kubernetes-dashboard-855c9754f9-tlp4f        kubernetes-dashboard
	ec8c24b028879       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   905f80dabc14f       coredns-66bc5c9577-v85b5                     kube-system
	8af694c901923       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   8d50dc1f0b5fe       busybox                                      default
	8594c5daefcc9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   85dad5763028b       kindnet-9qp8q                                kube-system
	580a1955626de       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   1027e96e51c51       kube-proxy-rsvq2                             kube-system
	86ca4639090df       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   49e6d6c5bcfd4       storage-provisioner                          kube-system
	3e0c4612dffa1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           52 seconds ago      Running             kube-scheduler              0                   25eae8f42b3bf       kube-scheduler-embed-certs-063117            kube-system
	121a4f69e5a4e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           52 seconds ago      Running             kube-controller-manager     0                   bb3e5ef3b9889       kube-controller-manager-embed-certs-063117   kube-system
	06ca051cf2af9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           52 seconds ago      Running             etcd                        0                   8e024bba50ef1       etcd-embed-certs-063117                      kube-system
	2beb45b096476       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   e2d18b41a2e66       kube-apiserver-embed-certs-063117            kube-system
	
	
	==> coredns [ec8c24b02887950550c5bfedba2b9c147d4462672b297fe7e1f23725f0ff2932] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46088 - 57642 "HINFO IN 5900581273714567931.8440220015163165932. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.177086974s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-063117
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-063117
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=embed-certs-063117
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_29_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:29:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-063117
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:30:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:30:42 +0000   Thu, 16 Oct 2025 18:29:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:30:42 +0000   Thu, 16 Oct 2025 18:29:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:30:42 +0000   Thu, 16 Oct 2025 18:29:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 18:30:42 +0000   Thu, 16 Oct 2025 18:29:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-063117
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                70725f86-975b-492e-a584-749604224fc0
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-v85b5                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-embed-certs-063117                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         108s
	  kube-system                 kindnet-9qp8q                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-embed-certs-063117             250m (3%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-embed-certs-063117    200m (2%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-rsvq2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-embed-certs-063117             100m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-g2nfx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tlp4f         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  108s               kubelet          Node embed-certs-063117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s               kubelet          Node embed-certs-063117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s               kubelet          Node embed-certs-063117 status is now: NodeHasSufficientPID
	  Normal  Starting                 108s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s               node-controller  Node embed-certs-063117 event: Registered Node embed-certs-063117 in Controller
	  Normal  NodeReady                92s                kubelet          Node embed-certs-063117 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node embed-certs-063117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node embed-certs-063117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node embed-certs-063117 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node embed-certs-063117 event: Registered Node embed-certs-063117 in Controller
	
	
	==> dmesg <==
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.008703] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.022984] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023933] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +2.047848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +4.031714] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +8.063410] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[ +16.382910] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[Oct16 17:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	
	
	==> etcd [06ca051cf2af9db9b9423a3d071cf2e2f07fed9b27fcff6325f04c31e90791ba] <==
	{"level":"warn","ts":"2025-10-16T18:30:10.625547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:10.633196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:10.640679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:10.647790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:10.654653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:10.664891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:10.671448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:10.677755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:10.735352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35398","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-16T18:30:19.940563Z","caller":"traceutil/trace.go:172","msg":"trace[1899419289] transaction","detail":"{read_only:false; response_revision:575; number_of_response:1; }","duration":"171.78072ms","start":"2025-10-16T18:30:19.768763Z","end":"2025-10-16T18:30:19.940544Z","steps":["trace[1899419289] 'process raft request'  (duration: 130.322215ms)","trace[1899419289] 'compare'  (duration: 41.355572ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-16T18:30:43.920581Z","caller":"traceutil/trace.go:172","msg":"trace[633025507] linearizableReadLoop","detail":"{readStateIndex:651; appliedIndex:651; }","duration":"161.517638ms","start":"2025-10-16T18:30:43.759015Z","end":"2025-10-16T18:30:43.920533Z","steps":["trace[633025507] 'read index received'  (duration: 161.507296ms)","trace[633025507] 'applied index is now lower than readState.Index'  (duration: 8.391µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-16T18:30:43.920805Z","caller":"traceutil/trace.go:172","msg":"trace[2102154994] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"162.514815ms","start":"2025-10-16T18:30:43.758277Z","end":"2025-10-16T18:30:43.920792Z","steps":["trace[2102154994] 'process raft request'  (duration: 162.351894ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-16T18:30:43.920822Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.784194ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-v85b5\" limit:1 ","response":"range_response_count:1 size:5936"}
	{"level":"info","ts":"2025-10-16T18:30:43.920892Z","caller":"traceutil/trace.go:172","msg":"trace[520197732] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-v85b5; range_end:; response_count:1; response_revision:615; }","duration":"161.876308ms","start":"2025-10-16T18:30:43.759004Z","end":"2025-10-16T18:30:43.920881Z","steps":["trace[520197732] 'agreement among raft nodes before linearized reading'  (duration: 161.664724ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T18:30:44.447892Z","caller":"traceutil/trace.go:172","msg":"trace[588536919] linearizableReadLoop","detail":"{readStateIndex:652; appliedIndex:652; }","duration":"188.997378ms","start":"2025-10-16T18:30:44.258868Z","end":"2025-10-16T18:30:44.447865Z","steps":["trace[588536919] 'read index received'  (duration: 188.989579ms)","trace[588536919] 'applied index is now lower than readState.Index'  (duration: 6.4µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-16T18:30:44.448110Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"189.218254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-v85b5\" limit:1 ","response":"range_response_count:1 size:5936"}
	{"level":"info","ts":"2025-10-16T18:30:44.448110Z","caller":"traceutil/trace.go:172","msg":"trace[2114708479] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"282.129583ms","start":"2025-10-16T18:30:44.165967Z","end":"2025-10-16T18:30:44.448097Z","steps":["trace[2114708479] 'process raft request'  (duration: 281.962054ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T18:30:44.448144Z","caller":"traceutil/trace.go:172","msg":"trace[1486949101] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-v85b5; range_end:; response_count:1; response_revision:616; }","duration":"189.272208ms","start":"2025-10-16T18:30:44.258863Z","end":"2025-10-16T18:30:44.448136Z","steps":["trace[1486949101] 'agreement among raft nodes before linearized reading'  (duration: 189.084668ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-16T18:30:44.723648Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.01798ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789375803562153 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:530 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:835 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-16T18:30:44.724182Z","caller":"traceutil/trace.go:172","msg":"trace[1682595931] transaction","detail":"{read_only:false; response_revision:620; number_of_response:1; }","duration":"269.490806ms","start":"2025-10-16T18:30:44.454677Z","end":"2025-10-16T18:30:44.724168Z","steps":["trace[1682595931] 'process raft request'  (duration: 269.268919ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T18:30:44.724358Z","caller":"traceutil/trace.go:172","msg":"trace[548296526] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"271.608076ms","start":"2025-10-16T18:30:44.452732Z","end":"2025-10-16T18:30:44.724340Z","steps":["trace[548296526] 'process raft request'  (duration: 131.247438ms)","trace[548296526] 'compare'  (duration: 138.906456ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-16T18:30:44.724399Z","caller":"traceutil/trace.go:172","msg":"trace[2076870443] transaction","detail":"{read_only:false; response_revision:619; number_of_response:1; }","duration":"271.677348ms","start":"2025-10-16T18:30:44.452711Z","end":"2025-10-16T18:30:44.724389Z","steps":["trace[2076870443] 'process raft request'  (duration: 271.142545ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-16T18:30:44.894337Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.065603ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-v85b5\" limit:1 ","response":"range_response_count:1 size:5758"}
	{"level":"info","ts":"2025-10-16T18:30:44.894356Z","caller":"traceutil/trace.go:172","msg":"trace[1767115820] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"162.585028ms","start":"2025-10-16T18:30:44.731752Z","end":"2025-10-16T18:30:44.894337Z","steps":["trace[1767115820] 'process raft request'  (duration: 127.135717ms)","trace[1767115820] 'compare'  (duration: 35.324366ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-16T18:30:44.894395Z","caller":"traceutil/trace.go:172","msg":"trace[490066829] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-v85b5; range_end:; response_count:1; response_revision:620; }","duration":"135.14042ms","start":"2025-10-16T18:30:44.759241Z","end":"2025-10-16T18:30:44.894382Z","steps":["trace[490066829] 'agreement among raft nodes before linearized reading'  (duration: 99.59738ms)","trace[490066829] 'range keys from in-memory index tree'  (duration: 35.368553ms)"],"step_count":2}
	
	
	==> kernel <==
	 18:31:02 up  1:13,  0 user,  load average: 5.79, 3.44, 2.07
	Linux embed-certs-063117 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8594c5daefcc948d0e17138aa8783128805c619d9b989653499c9f82482639b8] <==
	I1016 18:30:12.249319       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 18:30:12.249567       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1016 18:30:12.249857       1 main.go:148] setting mtu 1500 for CNI 
	I1016 18:30:12.249879       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 18:30:12.249903       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T18:30:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 18:30:12.451828       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 18:30:12.451884       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 18:30:12.451896       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 18:30:12.452344       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 18:30:13.252583       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 18:30:13.252665       1 metrics.go:72] Registering metrics
	I1016 18:30:13.252770       1 controller.go:711] "Syncing nftables rules"
	I1016 18:30:22.451968       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:30:22.452041       1 main.go:301] handling current node
	I1016 18:30:32.460836       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:30:32.460891       1 main.go:301] handling current node
	I1016 18:30:42.451916       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:30:42.451971       1 main.go:301] handling current node
	I1016 18:30:52.454819       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:30:52.454871       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2beb45b09647681cb2d18ce222e01f57ca8f2532e9f2683c679b5b3bbb182aeb] <==
	I1016 18:30:11.240903       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1016 18:30:11.240910       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1016 18:30:11.240912       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1016 18:30:11.240950       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1016 18:30:11.240878       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1016 18:30:11.241550       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 18:30:11.241655       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1016 18:30:11.241692       1 aggregator.go:171] initial CRD sync complete...
	I1016 18:30:11.241700       1 autoregister_controller.go:144] Starting autoregister controller
	I1016 18:30:11.241706       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 18:30:11.241737       1 cache.go:39] Caches are synced for autoregister controller
	E1016 18:30:11.247917       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1016 18:30:11.249368       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1016 18:30:11.283658       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1016 18:30:11.496825       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 18:30:11.530901       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 18:30:11.555514       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 18:30:11.564305       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 18:30:11.571369       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 18:30:11.614475       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.167.184"}
	I1016 18:30:11.635316       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.178.3"}
	I1016 18:30:12.143680       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 18:30:14.987922       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 18:30:15.041541       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 18:30:15.189410       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [121a4f69e5a4ec28f63e829110167be9cf60003ff5d32b2bdc8c692d0ace2885] <==
	I1016 18:30:14.565691       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1016 18:30:14.565836       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1016 18:30:14.565871       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1016 18:30:14.567045       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1016 18:30:14.569373       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1016 18:30:14.572571       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1016 18:30:14.574252       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1016 18:30:14.575421       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1016 18:30:14.583004       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1016 18:30:14.583039       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 18:30:14.583411       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 18:30:14.583448       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1016 18:30:14.583944       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 18:30:14.584346       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1016 18:30:14.584472       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1016 18:30:14.585035       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1016 18:30:14.585117       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1016 18:30:14.585490       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1016 18:30:14.587426       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1016 18:30:14.588478       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1016 18:30:14.590771       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 18:30:14.590842       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:30:14.593062       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 18:30:14.595362       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1016 18:30:14.614001       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [580a1955626de81ad6bfc45b716b795bbc8c63864a0d9ff99b5baaf1a66027b6] <==
	I1016 18:30:12.005986       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:30:12.065504       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:30:12.166182       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:30:12.166224       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1016 18:30:12.166342       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:30:12.187257       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:30:12.187342       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:30:12.193895       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:30:12.194385       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:30:12.194401       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:30:12.195826       1 config.go:200] "Starting service config controller"
	I1016 18:30:12.195849       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:30:12.195856       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:30:12.195864       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:30:12.195830       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:30:12.195892       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:30:12.195953       1 config.go:309] "Starting node config controller"
	I1016 18:30:12.195960       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:30:12.195966       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:30:12.296008       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 18:30:12.296124       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 18:30:12.296133       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3e0c4612dffa1aabc4e2f885041d6627f61173da3b7020983a01c437c6a01614] <==
	I1016 18:30:09.933772       1 serving.go:386] Generated self-signed cert in-memory
	W1016 18:30:11.185800       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1016 18:30:11.185873       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1016 18:30:11.185888       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1016 18:30:11.185913       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1016 18:30:11.215193       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 18:30:11.215322       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:30:11.219665       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:30:11.219734       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:30:11.220042       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 18:30:11.220097       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 18:30:11.319910       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 18:30:14 embed-certs-063117 kubelet[726]: I1016 18:30:14.155778     726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 16 18:30:15 embed-certs-063117 kubelet[726]: I1016 18:30:15.210347     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d1054e3f-7dbb-43e4-8225-5c9a66b292f9-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-g2nfx\" (UID: \"d1054e3f-7dbb-43e4-8225-5c9a66b292f9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx"
	Oct 16 18:30:15 embed-certs-063117 kubelet[726]: I1016 18:30:15.210412     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7bdl\" (UniqueName: \"kubernetes.io/projected/d1054e3f-7dbb-43e4-8225-5c9a66b292f9-kube-api-access-f7bdl\") pod \"dashboard-metrics-scraper-6ffb444bf9-g2nfx\" (UID: \"d1054e3f-7dbb-43e4-8225-5c9a66b292f9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx"
	Oct 16 18:30:15 embed-certs-063117 kubelet[726]: I1016 18:30:15.210482     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h9g4\" (UniqueName: \"kubernetes.io/projected/28398c65-3e03-41f6-98a9-0e25b57ac960-kube-api-access-9h9g4\") pod \"kubernetes-dashboard-855c9754f9-tlp4f\" (UID: \"28398c65-3e03-41f6-98a9-0e25b57ac960\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tlp4f"
	Oct 16 18:30:15 embed-certs-063117 kubelet[726]: I1016 18:30:15.210522     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/28398c65-3e03-41f6-98a9-0e25b57ac960-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-tlp4f\" (UID: \"28398c65-3e03-41f6-98a9-0e25b57ac960\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tlp4f"
	Oct 16 18:30:18 embed-certs-063117 kubelet[726]: I1016 18:30:18.668082     726 scope.go:117] "RemoveContainer" containerID="ab64566d7f710fd583684a54e7f27603f023d63fb20ba75ded92230c36c12027"
	Oct 16 18:30:19 embed-certs-063117 kubelet[726]: I1016 18:30:19.673977     726 scope.go:117] "RemoveContainer" containerID="ab64566d7f710fd583684a54e7f27603f023d63fb20ba75ded92230c36c12027"
	Oct 16 18:30:19 embed-certs-063117 kubelet[726]: I1016 18:30:19.674398     726 scope.go:117] "RemoveContainer" containerID="c4214726f97c45308f47debdc334ebe11c99a5cb6cae7fcd300adf1d46d73d64"
	Oct 16 18:30:19 embed-certs-063117 kubelet[726]: E1016 18:30:19.674560     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2nfx_kubernetes-dashboard(d1054e3f-7dbb-43e4-8225-5c9a66b292f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx" podUID="d1054e3f-7dbb-43e4-8225-5c9a66b292f9"
	Oct 16 18:30:20 embed-certs-063117 kubelet[726]: I1016 18:30:20.678993     726 scope.go:117] "RemoveContainer" containerID="c4214726f97c45308f47debdc334ebe11c99a5cb6cae7fcd300adf1d46d73d64"
	Oct 16 18:30:20 embed-certs-063117 kubelet[726]: E1016 18:30:20.679182     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2nfx_kubernetes-dashboard(d1054e3f-7dbb-43e4-8225-5c9a66b292f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx" podUID="d1054e3f-7dbb-43e4-8225-5c9a66b292f9"
	Oct 16 18:30:21 embed-certs-063117 kubelet[726]: I1016 18:30:21.885255     726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tlp4f" podStartSLOduration=1.037354618 podStartE2EDuration="6.885228473s" podCreationTimestamp="2025-10-16 18:30:15 +0000 UTC" firstStartedPulling="2025-10-16 18:30:15.511515236 +0000 UTC m=+7.000937062" lastFinishedPulling="2025-10-16 18:30:21.359389092 +0000 UTC m=+12.848810917" observedRunningTime="2025-10-16 18:30:21.699297021 +0000 UTC m=+13.188718867" watchObservedRunningTime="2025-10-16 18:30:21.885228473 +0000 UTC m=+13.374650319"
	Oct 16 18:30:27 embed-certs-063117 kubelet[726]: I1016 18:30:27.964182     726 scope.go:117] "RemoveContainer" containerID="c4214726f97c45308f47debdc334ebe11c99a5cb6cae7fcd300adf1d46d73d64"
	Oct 16 18:30:27 embed-certs-063117 kubelet[726]: E1016 18:30:27.964453     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2nfx_kubernetes-dashboard(d1054e3f-7dbb-43e4-8225-5c9a66b292f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx" podUID="d1054e3f-7dbb-43e4-8225-5c9a66b292f9"
	Oct 16 18:30:41 embed-certs-063117 kubelet[726]: I1016 18:30:41.615681     726 scope.go:117] "RemoveContainer" containerID="c4214726f97c45308f47debdc334ebe11c99a5cb6cae7fcd300adf1d46d73d64"
	Oct 16 18:30:41 embed-certs-063117 kubelet[726]: I1016 18:30:41.739234     726 scope.go:117] "RemoveContainer" containerID="c4214726f97c45308f47debdc334ebe11c99a5cb6cae7fcd300adf1d46d73d64"
	Oct 16 18:30:41 embed-certs-063117 kubelet[726]: I1016 18:30:41.739492     726 scope.go:117] "RemoveContainer" containerID="6343af61af5267179390838bbaf09507511c460d0f16d6487353a3356ee5cb20"
	Oct 16 18:30:41 embed-certs-063117 kubelet[726]: E1016 18:30:41.739692     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2nfx_kubernetes-dashboard(d1054e3f-7dbb-43e4-8225-5c9a66b292f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx" podUID="d1054e3f-7dbb-43e4-8225-5c9a66b292f9"
	Oct 16 18:30:42 embed-certs-063117 kubelet[726]: I1016 18:30:42.743595     726 scope.go:117] "RemoveContainer" containerID="86ca4639090df40b57d4d275c7f7d0354df18adeb33f2689643538a67a9a4213"
	Oct 16 18:30:47 embed-certs-063117 kubelet[726]: I1016 18:30:47.963852     726 scope.go:117] "RemoveContainer" containerID="6343af61af5267179390838bbaf09507511c460d0f16d6487353a3356ee5cb20"
	Oct 16 18:30:47 embed-certs-063117 kubelet[726]: E1016 18:30:47.964117     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2nfx_kubernetes-dashboard(d1054e3f-7dbb-43e4-8225-5c9a66b292f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx" podUID="d1054e3f-7dbb-43e4-8225-5c9a66b292f9"
	Oct 16 18:30:58 embed-certs-063117 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 18:30:58 embed-certs-063117 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 18:30:58 embed-certs-063117 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 16 18:30:58 embed-certs-063117 systemd[1]: kubelet.service: Consumed 1.717s CPU time.
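	# The kubelet entries above show dashboard-metrics-scraper stuck in CrashLoopBackOff,
	# with the back-off doubling from 10s to 20s before kubelet is stopped. A minimal
	# sketch for retrieving the crashed container's output, assuming the kubeconfig
	# context matches the profile name (as used elsewhere in this report) and the
	# previous container has not been garbage-collected yet:
	kubectl --context embed-certs-063117 -n kubernetes-dashboard \
	  logs dashboard-metrics-scraper-6ffb444bf9-g2nfx --previous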
	
	
	==> kubernetes-dashboard [44cbf419b1a42e9eb73523f5d588b99db8c45ab77ab1643b0118bfcce5a3f08a] <==
	2025/10/16 18:30:21 Using namespace: kubernetes-dashboard
	2025/10/16 18:30:21 Using in-cluster config to connect to apiserver
	2025/10/16 18:30:21 Using secret token for csrf signing
	2025/10/16 18:30:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/16 18:30:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/16 18:30:21 Successful initial request to the apiserver, version: v1.34.1
	2025/10/16 18:30:21 Generating JWE encryption key
	2025/10/16 18:30:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/16 18:30:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/16 18:30:21 Initializing JWE encryption key from synchronized object
	2025/10/16 18:30:21 Creating in-cluster Sidecar client
	2025/10/16 18:30:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 18:30:21 Serving insecurely on HTTP port: 9090
	2025/10/16 18:30:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 18:30:21 Starting overwatch
	
	
	==> storage-provisioner [86ca4639090df40b57d4d275c7f7d0354df18adeb33f2689643538a67a9a4213] <==
	I1016 18:30:11.980629       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1016 18:30:41.985116       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
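	# The provisioner exits fatally because the kubernetes Service ClusterIP
	# (10.96.0.1:443) stayed unreachable for its whole 30s window, consistent with the
	# service network still converging after the node restart at 18:30:02. A one-off
	# in-cluster probe of the same endpoint (a sketch; the curlimages/curl image is an
	# assumption):
	kubectl --context embed-certs-063117 run apicheck --rm -it --restart=Never \
	  --image=curlimages/curl -- curl -k -s -m 5 https://10.96.0.1:443/version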
	
	
	==> storage-provisioner [f128332ce9a15a83b597d85035bf0d9574b536f9f0ba19197e4afaa75110ed61] <==
	I1016 18:30:42.805576       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 18:30:42.813500       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 18:30:42.813537       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1016 18:30:42.816119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:46.272530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:50.533477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:54.132784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:57.186151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:00.211924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:00.217700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 18:31:00.217879       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 18:31:00.218094       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-063117_fa7ded68-6061-4680-8e0f-252d37c941fe!
	I1016 18:31:00.218405       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e411a7bc-148f-42cf-bac0-dc17cef1cd44", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-063117_fa7ded68-6061-4680-8e0f-252d37c941fe became leader
	W1016 18:31:00.222761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:00.228324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 18:31:00.318295       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-063117_fa7ded68-6061-4680-8e0f-252d37c941fe!
	W1016 18:31:02.242444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:02.248724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
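	# The replacement provisioner needs ~17s to acquire the k8s.io-minikube-hostpath
	# lease because the previous holder's record has to expire first; the repeated
	# warnings are expected, since this leader election still uses a v1 Endpoints
	# object as its lock. The election record lives in an annotation on that object
	# and can be read back with standard kubectl (context name mirrors the profile):
	kubectl --context embed-certs-063117 -n kube-system \
	  get endpoints k8s.io-minikube-hostpath -o yaml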
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-063117 -n embed-certs-063117
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-063117 -n embed-certs-063117: exit status 2 (428.43925ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-063117 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-063117
helpers_test.go:243: (dbg) docker inspect embed-certs-063117:

-- stdout --
	[
	    {
	        "Id": "1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1",
	        "Created": "2025-10-16T18:28:54.918690306Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 265713,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:30:02.078894139Z",
	            "FinishedAt": "2025-10-16T18:30:01.196271488Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1/hosts",
	        "LogPath": "/var/lib/docker/containers/1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1/1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1-json.log",
	        "Name": "/embed-certs-063117",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-063117:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-063117",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1fe6653a430a12f7e2f19104e2efbf311b6df20769a98ae5a9685386490f62e1",
	                "LowerDir": "/var/lib/docker/overlay2/6b98c07b3e2c8bbba9f118db15e4186266a8da19f0536e0a0088d84b01fc366f-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6b98c07b3e2c8bbba9f118db15e4186266a8da19f0536e0a0088d84b01fc366f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6b98c07b3e2c8bbba9f118db15e4186266a8da19f0536e0a0088d84b01fc366f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6b98c07b3e2c8bbba9f118db15e4186266a8da19f0536e0a0088d84b01fc366f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-063117",
	                "Source": "/var/lib/docker/volumes/embed-certs-063117/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-063117",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-063117",
	                "name.minikube.sigs.k8s.io": "embed-certs-063117",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e58640c535d3b64ab12de6be448f4c02c1b8a8b8f550185407e51a3227d8b5d0",
	            "SandboxKey": "/var/run/docker/netns/e58640c535d3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-063117": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:60:3b:7f:ff:0a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d58ff291817e0d805fb2a74d398badc9c07572e1fefc22609c9ab31d677b2e36",
	                    "EndpointID": "14001818e798857b9d949e230b6d558100606593d367fd2a2e960ba374dda3ce",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-063117",
	                        "1fe6653a430a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
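
The inspect output above requests dynamic host ports (empty "HostPort" entries under "PortBindings"), so the bindings actually chosen appear only under "NetworkSettings.Ports", e.g. 8443/tcp -> 127.0.0.1:33091. Either standard docker invocation reads the live mapping back:

	docker port embed-certs-063117 8443/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-063117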
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-063117 -n embed-certs-063117
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-063117 -n embed-certs-063117: exit status 2 (390.243812ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-063117 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-063117 logs -n 25: (1.497203208s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ no-preload-808539 image list --format=json                                                                                                                                                                                                    │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ pause   │ -p no-preload-808539 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-063117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │                     │
	│ stop    │ -p embed-certs-063117 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:30 UTC │
	│ delete  │ -p no-preload-808539                                                                                                                                                                                                                          │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ delete  │ -p no-preload-808539                                                                                                                                                                                                                          │ no-preload-808539            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:29 UTC │
	│ start   │ -p newest-cni-794682 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:29 UTC │ 16 Oct 25 18:30 UTC │
	│ addons  │ enable dashboard -p embed-certs-063117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ start   │ -p embed-certs-063117 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ addons  │ enable metrics-server -p newest-cni-794682 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ stop    │ -p newest-cni-794682 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ addons  │ enable dashboard -p newest-cni-794682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ start   │ -p newest-cni-794682 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-523257 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-523257 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ image   │ newest-cni-794682 image list --format=json                                                                                                                                                                                                    │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ pause   │ -p newest-cni-794682 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ delete  │ -p newest-cni-794682                                                                                                                                                                                                                          │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ delete  │ -p newest-cni-794682                                                                                                                                                                                                                          │ newest-cni-794682            │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ start   │ -p auto-084411 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-523257 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ start   │ -p default-k8s-diff-port-523257 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ image   │ embed-certs-063117 image list --format=json                                                                                                                                                                                                   │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │ 16 Oct 25 18:30 UTC │
	│ pause   │ -p embed-certs-063117 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-063117           │ jenkins │ v1.37.0 │ 16 Oct 25 18:30 UTC │                     │
	│ start   │ -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-750025    │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:31:03
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:31:03.806667  283779 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:31:03.807831  283779 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:31:03.807839  283779 out.go:374] Setting ErrFile to fd 2...
	I1016 18:31:03.807844  283779 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:31:03.808188  283779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:31:03.808838  283779 out.go:368] Setting JSON to false
	I1016 18:31:03.810515  283779 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4412,"bootTime":1760635052,"procs":352,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:31:03.810588  283779 start.go:141] virtualization: kvm guest
	I1016 18:31:03.815470  283779 out.go:179] * [kubernetes-upgrade-750025] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:31:03.817020  283779 notify.go:220] Checking for updates...
	I1016 18:31:03.819049  283779 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:31:03.820383  283779 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:31:03.821852  283779 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:31:03.827876  283779 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 18:31:03.829361  283779 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:31:03.830799  283779 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:31:03.833032  283779 config.go:182] Loaded profile config "kubernetes-upgrade-750025": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:31:03.835499  283779 out.go:203] 
	W1016 18:31:03.836937  283779 out.go:285] X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	W1016 18:31:03.837084  283779 out.go:285] * Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-750025
	    minikube start -p kubernetes-upgrade-750025 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7500252 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-750025 --kubernetes-version=v1.34.1
	    
	I1016 18:31:03.839379  283779 out.go:203] 
	
	
	==> CRI-O <==
	Oct 16 18:30:22 embed-certs-063117 crio[566]: time="2025-10-16T18:30:22.472977355Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 16 18:30:22 embed-certs-063117 crio[566]: time="2025-10-16T18:30:22.476678634Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 16 18:30:22 embed-certs-063117 crio[566]: time="2025-10-16T18:30:22.476706027Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.616244524Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1513d78e-43e1-45ba-9f1e-69ee4aa0c059 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.617342321Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c3b24672-32da-40ff-9a48-96494186b3b2 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.618877811Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx/dashboard-metrics-scraper" id=8a840319-cb3b-44e6-86bf-060c1fb4883c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.61920062Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.626795769Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.627485857Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.661105378Z" level=info msg="Created container 6343af61af5267179390838bbaf09507511c460d0f16d6487353a3356ee5cb20: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx/dashboard-metrics-scraper" id=8a840319-cb3b-44e6-86bf-060c1fb4883c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.661859575Z" level=info msg="Starting container: 6343af61af5267179390838bbaf09507511c460d0f16d6487353a3356ee5cb20" id=2fac6ece-673c-43f6-aca1-105b8e056e09 name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.663826132Z" level=info msg="Started container" PID=1763 containerID=6343af61af5267179390838bbaf09507511c460d0f16d6487353a3356ee5cb20 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx/dashboard-metrics-scraper id=2fac6ece-673c-43f6-aca1-105b8e056e09 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5885c8efe72c7b158de7f4cd9442ae699693169165af187e6c1229761beedd3b
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.741929051Z" level=info msg="Removing container: c4214726f97c45308f47debdc334ebe11c99a5cb6cae7fcd300adf1d46d73d64" id=98313a10-19e2-45f2-85de-e4f7481d8e73 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:30:41 embed-certs-063117 crio[566]: time="2025-10-16T18:30:41.758202568Z" level=info msg="Removed container c4214726f97c45308f47debdc334ebe11c99a5cb6cae7fcd300adf1d46d73d64: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx/dashboard-metrics-scraper" id=98313a10-19e2-45f2-85de-e4f7481d8e73 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.744032118Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b45ae9a0-a84d-4c40-9b66-96f42013013c name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.744926367Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bed6b915-cec8-4cb1-ae87-d6be6e374fdd name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.746563199Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=76f5b17e-e43b-4857-b74d-f5e1271f4cbf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.746904608Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.751865Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.752069643Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9b3eaf128f09dec10d644242622fa64dbd3523f91ea6c98caae10cbb57fbe56d/merged/etc/passwd: no such file or directory"
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.752096347Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9b3eaf128f09dec10d644242622fa64dbd3523f91ea6c98caae10cbb57fbe56d/merged/etc/group: no such file or directory"
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.752314384Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.790091563Z" level=info msg="Created container f128332ce9a15a83b597d85035bf0d9574b536f9f0ba19197e4afaa75110ed61: kube-system/storage-provisioner/storage-provisioner" id=76f5b17e-e43b-4857-b74d-f5e1271f4cbf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.790779778Z" level=info msg="Starting container: f128332ce9a15a83b597d85035bf0d9574b536f9f0ba19197e4afaa75110ed61" id=da1dd319-bb5b-495a-ae37-29e3b2f2ecca name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:30:42 embed-certs-063117 crio[566]: time="2025-10-16T18:30:42.793043029Z" level=info msg="Started container" PID=1777 containerID=f128332ce9a15a83b597d85035bf0d9574b536f9f0ba19197e4afaa75110ed61 description=kube-system/storage-provisioner/storage-provisioner id=da1dd319-bb5b-495a-ae37-29e3b2f2ecca name=/runtime.v1.RuntimeService/StartContainer sandboxID=49e6d6c5bcfd491e069e47cf75af4c20d955114e63cfdd67649ee2422fd773a8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f128332ce9a15       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   49e6d6c5bcfd4       storage-provisioner                          kube-system
	6343af61af526       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   5885c8efe72c7       dashboard-metrics-scraper-6ffb444bf9-g2nfx   kubernetes-dashboard
	44cbf419b1a42       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   120ed500c6092       kubernetes-dashboard-855c9754f9-tlp4f        kubernetes-dashboard
	ec8c24b028879       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   905f80dabc14f       coredns-66bc5c9577-v85b5                     kube-system
	8af694c901923       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   8d50dc1f0b5fe       busybox                                      default
	8594c5daefcc9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   85dad5763028b       kindnet-9qp8q                                kube-system
	580a1955626de       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   1027e96e51c51       kube-proxy-rsvq2                             kube-system
	86ca4639090df       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   49e6d6c5bcfd4       storage-provisioner                          kube-system
	3e0c4612dffa1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   25eae8f42b3bf       kube-scheduler-embed-certs-063117            kube-system
	121a4f69e5a4e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   bb3e5ef3b9889       kube-controller-manager-embed-certs-063117   kube-system
	06ca051cf2af9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   8e024bba50ef1       etcd-embed-certs-063117                      kube-system
	2beb45b096476       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   e2d18b41a2e66       kube-apiserver-embed-certs-063117            kube-system
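	# The ATTEMPT column matches the kubelet log: dashboard-metrics-scraper is already on
	# its third attempt and has Exited again while everything else is Running. On a
	# CRI-O node the same view, and the dead container's output, come from crictl; a
	# sketch using minikube ssh:
	out/minikube-linux-amd64 -p embed-certs-063117 ssh -- sudo crictl ps -a --name dashboard-metrics-scraper
	out/minikube-linux-amd64 -p embed-certs-063117 ssh -- sudo crictl logs 6343af61af526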
	
	
	==> coredns [ec8c24b02887950550c5bfedba2b9c147d4462672b297fe7e1f23725f0ff2932] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46088 - 57642 "HINFO IN 5900581273714567931.8440220015163165932. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.177086974s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-063117
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-063117
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=embed-certs-063117
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_29_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:29:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-063117
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:30:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:30:42 +0000   Thu, 16 Oct 2025 18:29:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:30:42 +0000   Thu, 16 Oct 2025 18:29:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:30:42 +0000   Thu, 16 Oct 2025 18:29:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 18:30:42 +0000   Thu, 16 Oct 2025 18:29:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-063117
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                70725f86-975b-492e-a584-749604224fc0
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-v85b5                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-embed-certs-063117                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-9qp8q                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-embed-certs-063117             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-embed-certs-063117    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-rsvq2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-embed-certs-063117             100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-g2nfx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tlp4f         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  110s               kubelet          Node embed-certs-063117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s               kubelet          Node embed-certs-063117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s               kubelet          Node embed-certs-063117 status is now: NodeHasSufficientPID
	  Normal  Starting                 110s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s               node-controller  Node embed-certs-063117 event: Registered Node embed-certs-063117 in Controller
	  Normal  NodeReady                94s                kubelet          Node embed-certs-063117 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node embed-certs-063117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node embed-certs-063117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node embed-certs-063117 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node embed-certs-063117 event: Registered Node embed-certs-063117 in Controller
	
	
	==> dmesg <==
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.008703] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.022984] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023933] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +2.047848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +4.031714] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +8.063410] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[ +16.382910] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[Oct16 17:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
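	# The repeated "martian source 10.244.0.20 from 127.0.0.1" lines are the kernel
	# flagging hairpin pod-network traffic; their Oct16 17:46 timestamps predate this
	# cluster (created 18:28:54), so they are most likely residue from an earlier
	# profile on this shared agent. They show up because martian logging is enabled,
	# which can be confirmed on the host with:
	sysctl net.ipv4.conf.all.log_martians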
	
	
	==> etcd [06ca051cf2af9db9b9423a3d071cf2e2f07fed9b27fcff6325f04c31e90791ba] <==
	{"level":"warn","ts":"2025-10-16T18:30:10.625547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:10.633196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:10.640679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:10.647790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:10.654653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:10.664891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:10.671448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:10.677755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:10.735352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35398","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-16T18:30:19.940563Z","caller":"traceutil/trace.go:172","msg":"trace[1899419289] transaction","detail":"{read_only:false; response_revision:575; number_of_response:1; }","duration":"171.78072ms","start":"2025-10-16T18:30:19.768763Z","end":"2025-10-16T18:30:19.940544Z","steps":["trace[1899419289] 'process raft request'  (duration: 130.322215ms)","trace[1899419289] 'compare'  (duration: 41.355572ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-16T18:30:43.920581Z","caller":"traceutil/trace.go:172","msg":"trace[633025507] linearizableReadLoop","detail":"{readStateIndex:651; appliedIndex:651; }","duration":"161.517638ms","start":"2025-10-16T18:30:43.759015Z","end":"2025-10-16T18:30:43.920533Z","steps":["trace[633025507] 'read index received'  (duration: 161.507296ms)","trace[633025507] 'applied index is now lower than readState.Index'  (duration: 8.391µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-16T18:30:43.920805Z","caller":"traceutil/trace.go:172","msg":"trace[2102154994] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"162.514815ms","start":"2025-10-16T18:30:43.758277Z","end":"2025-10-16T18:30:43.920792Z","steps":["trace[2102154994] 'process raft request'  (duration: 162.351894ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-16T18:30:43.920822Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.784194ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-v85b5\" limit:1 ","response":"range_response_count:1 size:5936"}
	{"level":"info","ts":"2025-10-16T18:30:43.920892Z","caller":"traceutil/trace.go:172","msg":"trace[520197732] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-v85b5; range_end:; response_count:1; response_revision:615; }","duration":"161.876308ms","start":"2025-10-16T18:30:43.759004Z","end":"2025-10-16T18:30:43.920881Z","steps":["trace[520197732] 'agreement among raft nodes before linearized reading'  (duration: 161.664724ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T18:30:44.447892Z","caller":"traceutil/trace.go:172","msg":"trace[588536919] linearizableReadLoop","detail":"{readStateIndex:652; appliedIndex:652; }","duration":"188.997378ms","start":"2025-10-16T18:30:44.258868Z","end":"2025-10-16T18:30:44.447865Z","steps":["trace[588536919] 'read index received'  (duration: 188.989579ms)","trace[588536919] 'applied index is now lower than readState.Index'  (duration: 6.4µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-16T18:30:44.448110Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"189.218254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-v85b5\" limit:1 ","response":"range_response_count:1 size:5936"}
	{"level":"info","ts":"2025-10-16T18:30:44.448110Z","caller":"traceutil/trace.go:172","msg":"trace[2114708479] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"282.129583ms","start":"2025-10-16T18:30:44.165967Z","end":"2025-10-16T18:30:44.448097Z","steps":["trace[2114708479] 'process raft request'  (duration: 281.962054ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T18:30:44.448144Z","caller":"traceutil/trace.go:172","msg":"trace[1486949101] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-v85b5; range_end:; response_count:1; response_revision:616; }","duration":"189.272208ms","start":"2025-10-16T18:30:44.258863Z","end":"2025-10-16T18:30:44.448136Z","steps":["trace[1486949101] 'agreement among raft nodes before linearized reading'  (duration: 189.084668ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-16T18:30:44.723648Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.01798ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789375803562153 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:530 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:835 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-16T18:30:44.724182Z","caller":"traceutil/trace.go:172","msg":"trace[1682595931] transaction","detail":"{read_only:false; response_revision:620; number_of_response:1; }","duration":"269.490806ms","start":"2025-10-16T18:30:44.454677Z","end":"2025-10-16T18:30:44.724168Z","steps":["trace[1682595931] 'process raft request'  (duration: 269.268919ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-16T18:30:44.724358Z","caller":"traceutil/trace.go:172","msg":"trace[548296526] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"271.608076ms","start":"2025-10-16T18:30:44.452732Z","end":"2025-10-16T18:30:44.724340Z","steps":["trace[548296526] 'process raft request'  (duration: 131.247438ms)","trace[548296526] 'compare'  (duration: 138.906456ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-16T18:30:44.724399Z","caller":"traceutil/trace.go:172","msg":"trace[2076870443] transaction","detail":"{read_only:false; response_revision:619; number_of_response:1; }","duration":"271.677348ms","start":"2025-10-16T18:30:44.452711Z","end":"2025-10-16T18:30:44.724389Z","steps":["trace[2076870443] 'process raft request'  (duration: 271.142545ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-16T18:30:44.894337Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.065603ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-v85b5\" limit:1 ","response":"range_response_count:1 size:5758"}
	{"level":"info","ts":"2025-10-16T18:30:44.894356Z","caller":"traceutil/trace.go:172","msg":"trace[1767115820] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"162.585028ms","start":"2025-10-16T18:30:44.731752Z","end":"2025-10-16T18:30:44.894337Z","steps":["trace[1767115820] 'process raft request'  (duration: 127.135717ms)","trace[1767115820] 'compare'  (duration: 35.324366ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-16T18:30:44.894395Z","caller":"traceutil/trace.go:172","msg":"trace[490066829] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-v85b5; range_end:; response_count:1; response_revision:620; }","duration":"135.14042ms","start":"2025-10-16T18:30:44.759241Z","end":"2025-10-16T18:30:44.894382Z","steps":["trace[490066829] 'agreement among raft nodes before linearized reading'  (duration: 99.59738ms)","trace[490066829] 'range keys from in-memory index tree'  (duration: 35.368553ms)"],"step_count":2}
	
	
	==> kernel <==
	 18:31:04 up  1:13,  0 user,  load average: 7.01, 3.74, 2.18
	Linux embed-certs-063117 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8594c5daefcc948d0e17138aa8783128805c619d9b989653499c9f82482639b8] <==
	I1016 18:30:12.249319       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 18:30:12.249567       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1016 18:30:12.249857       1 main.go:148] setting mtu 1500 for CNI 
	I1016 18:30:12.249879       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 18:30:12.249903       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T18:30:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 18:30:12.451828       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 18:30:12.451884       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 18:30:12.451896       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 18:30:12.452344       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 18:30:13.252583       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 18:30:13.252665       1 metrics.go:72] Registering metrics
	I1016 18:30:13.252770       1 controller.go:711] "Syncing nftables rules"
	I1016 18:30:22.451968       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:30:22.452041       1 main.go:301] handling current node
	I1016 18:30:32.460836       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:30:32.460891       1 main.go:301] handling current node
	I1016 18:30:42.451916       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:30:42.451971       1 main.go:301] handling current node
	I1016 18:30:52.454819       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:30:52.454871       1 main.go:301] handling current node
	I1016 18:31:02.460875       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1016 18:31:02.460917       1 main.go:301] handling current node
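
After its caches sync, kindnet settles into a 10-second reconcile loop, logging "Handling node with IPs" on each tick. A minimal sketch of that ticker-driven pattern; the handleNode body is a placeholder, not kindnet's actual sync logic:

    package main

    import (
    	"fmt"
    	"time"
    )

    // Placeholder for the per-node sync work (routes, nftables rules, ...).
    func handleNode() {
    	fmt.Println("Handling node with IPs: map[192.168.103.2:{}]")
    }

    func main() {
    	ticker := time.NewTicker(10 * time.Second)
    	defer ticker.Stop()
    	for range ticker.C {
    		handleNode()
    	}
    }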
	
	
	==> kube-apiserver [2beb45b09647681cb2d18ce222e01f57ca8f2532e9f2683c679b5b3bbb182aeb] <==
	I1016 18:30:11.240903       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1016 18:30:11.240910       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1016 18:30:11.240912       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1016 18:30:11.240950       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1016 18:30:11.240878       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1016 18:30:11.241550       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 18:30:11.241655       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1016 18:30:11.241692       1 aggregator.go:171] initial CRD sync complete...
	I1016 18:30:11.241700       1 autoregister_controller.go:144] Starting autoregister controller
	I1016 18:30:11.241706       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1016 18:30:11.241737       1 cache.go:39] Caches are synced for autoregister controller
	E1016 18:30:11.247917       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1016 18:30:11.249368       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1016 18:30:11.283658       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1016 18:30:11.496825       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 18:30:11.530901       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 18:30:11.555514       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 18:30:11.564305       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 18:30:11.571369       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 18:30:11.614475       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.167.184"}
	I1016 18:30:11.635316       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.178.3"}
	I1016 18:30:12.143680       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 18:30:14.987922       1 controller.go:667] quota admission added evaluator for: endpoints
	I1016 18:30:15.041541       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 18:30:15.189410       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [121a4f69e5a4ec28f63e829110167be9cf60003ff5d32b2bdc8c692d0ace2885] <==
	I1016 18:30:14.565691       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1016 18:30:14.565836       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1016 18:30:14.565871       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1016 18:30:14.567045       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1016 18:30:14.569373       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1016 18:30:14.572571       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1016 18:30:14.574252       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1016 18:30:14.575421       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1016 18:30:14.583004       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1016 18:30:14.583039       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 18:30:14.583411       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 18:30:14.583448       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1016 18:30:14.583944       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1016 18:30:14.584346       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1016 18:30:14.584472       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1016 18:30:14.585035       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1016 18:30:14.585117       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1016 18:30:14.585490       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1016 18:30:14.587426       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1016 18:30:14.588478       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1016 18:30:14.590771       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1016 18:30:14.590842       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:30:14.593062       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 18:30:14.595362       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1016 18:30:14.614001       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [580a1955626de81ad6bfc45b716b795bbc8c63864a0d9ff99b5baaf1a66027b6] <==
	I1016 18:30:12.005986       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:30:12.065504       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:30:12.166182       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:30:12.166224       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1016 18:30:12.166342       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:30:12.187257       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:30:12.187342       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:30:12.193895       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:30:12.194385       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:30:12.194401       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:30:12.195826       1 config.go:200] "Starting service config controller"
	I1016 18:30:12.195849       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:30:12.195856       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:30:12.195864       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:30:12.195830       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:30:12.195892       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:30:12.195953       1 config.go:309] "Starting node config controller"
	I1016 18:30:12.195960       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:30:12.195966       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:30:12.296008       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 18:30:12.296124       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 18:30:12.296133       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
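
kube-proxy's startup above follows the standard client-go informer handshake: each config controller logs "Waiting for caches to sync" and, once its initial list/watch completes, "Caches are synced". A minimal sketch of the same pattern with a shared informer factory; it assumes in-cluster credentials, and the node informer is just an example resource:

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    	"k8s.io/client-go/tools/cache"
    )

    func main() {
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	stop := make(chan struct{})
    	defer close(stop)

    	// One factory per client; its informers share watch machinery.
    	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
    	nodes := factory.Core().V1().Nodes().Informer()

    	factory.Start(stop)
    	fmt.Println("Waiting for caches to sync")
    	if !cache.WaitForCacheSync(stop, nodes.HasSynced) {
    		panic("timed out waiting for caches to sync")
    	}
    	fmt.Println("Caches are synced")
    }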
	
	
	==> kube-scheduler [3e0c4612dffa1aabc4e2f885041d6627f61173da3b7020983a01c437c6a01614] <==
	I1016 18:30:09.933772       1 serving.go:386] Generated self-signed cert in-memory
	W1016 18:30:11.185800       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1016 18:30:11.185873       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1016 18:30:11.185888       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1016 18:30:11.185913       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1016 18:30:11.215193       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1016 18:30:11.215322       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:30:11.219665       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:30:11.219734       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:30:11.220042       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 18:30:11.220097       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1016 18:30:11.319910       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 18:30:14 embed-certs-063117 kubelet[726]: I1016 18:30:14.155778     726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 16 18:30:15 embed-certs-063117 kubelet[726]: I1016 18:30:15.210347     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d1054e3f-7dbb-43e4-8225-5c9a66b292f9-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-g2nfx\" (UID: \"d1054e3f-7dbb-43e4-8225-5c9a66b292f9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx"
	Oct 16 18:30:15 embed-certs-063117 kubelet[726]: I1016 18:30:15.210412     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7bdl\" (UniqueName: \"kubernetes.io/projected/d1054e3f-7dbb-43e4-8225-5c9a66b292f9-kube-api-access-f7bdl\") pod \"dashboard-metrics-scraper-6ffb444bf9-g2nfx\" (UID: \"d1054e3f-7dbb-43e4-8225-5c9a66b292f9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx"
	Oct 16 18:30:15 embed-certs-063117 kubelet[726]: I1016 18:30:15.210482     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h9g4\" (UniqueName: \"kubernetes.io/projected/28398c65-3e03-41f6-98a9-0e25b57ac960-kube-api-access-9h9g4\") pod \"kubernetes-dashboard-855c9754f9-tlp4f\" (UID: \"28398c65-3e03-41f6-98a9-0e25b57ac960\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tlp4f"
	Oct 16 18:30:15 embed-certs-063117 kubelet[726]: I1016 18:30:15.210522     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/28398c65-3e03-41f6-98a9-0e25b57ac960-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-tlp4f\" (UID: \"28398c65-3e03-41f6-98a9-0e25b57ac960\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tlp4f"
	Oct 16 18:30:18 embed-certs-063117 kubelet[726]: I1016 18:30:18.668082     726 scope.go:117] "RemoveContainer" containerID="ab64566d7f710fd583684a54e7f27603f023d63fb20ba75ded92230c36c12027"
	Oct 16 18:30:19 embed-certs-063117 kubelet[726]: I1016 18:30:19.673977     726 scope.go:117] "RemoveContainer" containerID="ab64566d7f710fd583684a54e7f27603f023d63fb20ba75ded92230c36c12027"
	Oct 16 18:30:19 embed-certs-063117 kubelet[726]: I1016 18:30:19.674398     726 scope.go:117] "RemoveContainer" containerID="c4214726f97c45308f47debdc334ebe11c99a5cb6cae7fcd300adf1d46d73d64"
	Oct 16 18:30:19 embed-certs-063117 kubelet[726]: E1016 18:30:19.674560     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2nfx_kubernetes-dashboard(d1054e3f-7dbb-43e4-8225-5c9a66b292f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx" podUID="d1054e3f-7dbb-43e4-8225-5c9a66b292f9"
	Oct 16 18:30:20 embed-certs-063117 kubelet[726]: I1016 18:30:20.678993     726 scope.go:117] "RemoveContainer" containerID="c4214726f97c45308f47debdc334ebe11c99a5cb6cae7fcd300adf1d46d73d64"
	Oct 16 18:30:20 embed-certs-063117 kubelet[726]: E1016 18:30:20.679182     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2nfx_kubernetes-dashboard(d1054e3f-7dbb-43e4-8225-5c9a66b292f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx" podUID="d1054e3f-7dbb-43e4-8225-5c9a66b292f9"
	Oct 16 18:30:21 embed-certs-063117 kubelet[726]: I1016 18:30:21.885255     726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tlp4f" podStartSLOduration=1.037354618 podStartE2EDuration="6.885228473s" podCreationTimestamp="2025-10-16 18:30:15 +0000 UTC" firstStartedPulling="2025-10-16 18:30:15.511515236 +0000 UTC m=+7.000937062" lastFinishedPulling="2025-10-16 18:30:21.359389092 +0000 UTC m=+12.848810917" observedRunningTime="2025-10-16 18:30:21.699297021 +0000 UTC m=+13.188718867" watchObservedRunningTime="2025-10-16 18:30:21.885228473 +0000 UTC m=+13.374650319"
	Oct 16 18:30:27 embed-certs-063117 kubelet[726]: I1016 18:30:27.964182     726 scope.go:117] "RemoveContainer" containerID="c4214726f97c45308f47debdc334ebe11c99a5cb6cae7fcd300adf1d46d73d64"
	Oct 16 18:30:27 embed-certs-063117 kubelet[726]: E1016 18:30:27.964453     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2nfx_kubernetes-dashboard(d1054e3f-7dbb-43e4-8225-5c9a66b292f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx" podUID="d1054e3f-7dbb-43e4-8225-5c9a66b292f9"
	Oct 16 18:30:41 embed-certs-063117 kubelet[726]: I1016 18:30:41.615681     726 scope.go:117] "RemoveContainer" containerID="c4214726f97c45308f47debdc334ebe11c99a5cb6cae7fcd300adf1d46d73d64"
	Oct 16 18:30:41 embed-certs-063117 kubelet[726]: I1016 18:30:41.739234     726 scope.go:117] "RemoveContainer" containerID="c4214726f97c45308f47debdc334ebe11c99a5cb6cae7fcd300adf1d46d73d64"
	Oct 16 18:30:41 embed-certs-063117 kubelet[726]: I1016 18:30:41.739492     726 scope.go:117] "RemoveContainer" containerID="6343af61af5267179390838bbaf09507511c460d0f16d6487353a3356ee5cb20"
	Oct 16 18:30:41 embed-certs-063117 kubelet[726]: E1016 18:30:41.739692     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2nfx_kubernetes-dashboard(d1054e3f-7dbb-43e4-8225-5c9a66b292f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx" podUID="d1054e3f-7dbb-43e4-8225-5c9a66b292f9"
	Oct 16 18:30:42 embed-certs-063117 kubelet[726]: I1016 18:30:42.743595     726 scope.go:117] "RemoveContainer" containerID="86ca4639090df40b57d4d275c7f7d0354df18adeb33f2689643538a67a9a4213"
	Oct 16 18:30:47 embed-certs-063117 kubelet[726]: I1016 18:30:47.963852     726 scope.go:117] "RemoveContainer" containerID="6343af61af5267179390838bbaf09507511c460d0f16d6487353a3356ee5cb20"
	Oct 16 18:30:47 embed-certs-063117 kubelet[726]: E1016 18:30:47.964117     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-g2nfx_kubernetes-dashboard(d1054e3f-7dbb-43e4-8225-5c9a66b292f9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-g2nfx" podUID="d1054e3f-7dbb-43e4-8225-5c9a66b292f9"
	Oct 16 18:30:58 embed-certs-063117 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 18:30:58 embed-certs-063117 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 18:30:58 embed-certs-063117 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 16 18:30:58 embed-certs-063117 systemd[1]: kubelet.service: Consumed 1.717s CPU time.
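
The kubelet errors above show dashboard-metrics-scraper in CrashLoopBackOff, with the logged back-off doubling from 10s to 20s across restarts; kubelet keeps doubling the restart delay after each crash up to a cap (five minutes in stock kubelet). A sketch of that doubling, with constants mirroring kubelet's defaults rather than its actual code:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Initial delay and cap mirror kubelet's CrashLoopBackOff defaults.
    	delay, limit := 10*time.Second, 5*time.Minute
    	for crash := 1; crash <= 7; crash++ {
    		fmt.Printf("crash %d: back-off %s restarting failed container\n", crash, delay)
    		delay *= 2
    		if delay > limit {
    			delay = limit
    		}
    	}
    }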
	
	
	==> kubernetes-dashboard [44cbf419b1a42e9eb73523f5d588b99db8c45ab77ab1643b0118bfcce5a3f08a] <==
	2025/10/16 18:30:21 Using namespace: kubernetes-dashboard
	2025/10/16 18:30:21 Using in-cluster config to connect to apiserver
	2025/10/16 18:30:21 Using secret token for csrf signing
	2025/10/16 18:30:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/16 18:30:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/16 18:30:21 Successful initial request to the apiserver, version: v1.34.1
	2025/10/16 18:30:21 Generating JWE encryption key
	2025/10/16 18:30:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/16 18:30:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/16 18:30:21 Initializing JWE encryption key from synchronized object
	2025/10/16 18:30:21 Creating in-cluster Sidecar client
	2025/10/16 18:30:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 18:30:21 Serving insecurely on HTTP port: 9090
	2025/10/16 18:30:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 18:30:21 Starting overwatch
	
	
	==> storage-provisioner [86ca4639090df40b57d4d275c7f7d0354df18adeb33f2689643538a67a9a4213] <==
	I1016 18:30:11.980629       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1016 18:30:41.985116       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
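
This first storage-provisioner instance dies on an i/o timeout dialing 10.96.0.1:443, the in-cluster kubernetes Service VIP, which typically means kube-proxy or the CNI had not yet programmed service routing when the pod started; the replacement instance below comes up cleanly. A minimal connectivity probe for that failure mode; the address is taken from the log line and the timeout is illustrative:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same endpoint the provisioner timed out against.
    	addr := "10.96.0.1:443"
    	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
    	if err != nil {
    		fmt.Printf("service VIP %s unreachable: %v\n", addr, err)
    		return
    	}
    	conn.Close()
    	fmt.Printf("service VIP %s reachable\n", addr)
    }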
	
	
	==> storage-provisioner [f128332ce9a15a83b597d85035bf0d9574b536f9f0ba19197e4afaa75110ed61] <==
	I1016 18:30:42.805576       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 18:30:42.813500       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 18:30:42.813537       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1016 18:30:42.816119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:46.272530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:50.533477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:54.132784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:30:57.186151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:00.211924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:00.217700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 18:31:00.217879       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 18:31:00.218094       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-063117_fa7ded68-6061-4680-8e0f-252d37c941fe!
	I1016 18:31:00.218405       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e411a7bc-148f-42cf-bac0-dc17cef1cd44", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-063117_fa7ded68-6061-4680-8e0f-252d37c941fe became leader
	W1016 18:31:00.222761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:00.228324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 18:31:00.318295       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-063117_fa7ded68-6061-4680-8e0f-252d37c941fe!
	W1016 18:31:02.242444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:02.248724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:04.254641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:04.264038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
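
The replacement storage-provisioner spends about 17 seconds acquiring the kube-system/k8s.io-minikube-hostpath lease before starting its controller, while client-go warns that its v1 Endpoints lock is deprecated in favor of EndpointSlice and Lease objects. A minimal sketch of the same election done against a coordination/v1 Lease instead; it assumes in-cluster credentials, and the identity and timings are illustrative, not the provisioner's own:

    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    	"k8s.io/client-go/tools/leaderelection"
    	"k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	id, _ := os.Hostname()

    	// Lease-based lock; avoids the deprecated v1 Endpoints object.
    	lock := &resourcelock.LeaseLock{
    		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
    		Client:     cs.CoordinationV1(),
    		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
    	}

    	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
    		Lock:          lock,
    		LeaseDuration: 15 * time.Second,
    		RenewDeadline: 10 * time.Second,
    		RetryPeriod:   2 * time.Second,
    		Callbacks: leaderelection.LeaderCallbacks{
    			OnStartedLeading: func(ctx context.Context) {
    				fmt.Println("successfully acquired lease; starting controller")
    			},
    			OnStoppedLeading: func() {
    				fmt.Println("lost lease; shutting down")
    			},
    		},
    	})
    }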
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-063117 -n embed-certs-063117
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-063117 -n embed-certs-063117: exit status 2 (376.82145ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
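
The status --format={{.APIServer}} invocation above, like the docker container inspect -f calls later in this report, uses Go text/template syntax to pull a single field out of a status struct. A minimal sketch of how such a template renders; the Status type and its fields are illustrative stand-ins, not minikube's actual type:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Illustrative stand-in for a status object; only the field name
    // referenced by the template matters.
    type Status struct {
    	Host, Kubelet, APIServer string
    }

    func main() {
    	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
    	// Prints "Running", matching the stdout captured above.
    	if err := tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}); err != nil {
    		panic(err)
    	}
    }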
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-063117 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.75s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-523257 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-523257 --alsologtostderr -v=1: exit status 80 (2.030621614s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-523257 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:31:49.400413  296591 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:31:49.400741  296591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:31:49.400755  296591 out.go:374] Setting ErrFile to fd 2...
	I1016 18:31:49.400761  296591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:31:49.401081  296591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:31:49.401393  296591 out.go:368] Setting JSON to false
	I1016 18:31:49.401444  296591 mustload.go:65] Loading cluster: default-k8s-diff-port-523257
	I1016 18:31:49.401887  296591 config.go:182] Loaded profile config "default-k8s-diff-port-523257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:31:49.402324  296591 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523257 --format={{.State.Status}}
	I1016 18:31:49.423662  296591 host.go:66] Checking if "default-k8s-diff-port-523257" exists ...
	I1016 18:31:49.424084  296591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:31:49.502208  296591 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-16 18:31:49.489323602 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:31:49.503044  296591 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-523257 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1016 18:31:49.505523  296591 out.go:179] * Pausing node default-k8s-diff-port-523257 ... 
	I1016 18:31:49.507178  296591 host.go:66] Checking if "default-k8s-diff-port-523257" exists ...
	I1016 18:31:49.507585  296591 ssh_runner.go:195] Run: systemctl --version
	I1016 18:31:49.507655  296591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523257
	I1016 18:31:49.533690  296591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/default-k8s-diff-port-523257/id_rsa Username:docker}
	I1016 18:31:49.641929  296591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:31:49.674545  296591 pause.go:52] kubelet running: true
	I1016 18:31:49.674617  296591 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:31:49.894360  296591 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:31:49.894474  296591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:31:49.991515  296591 cri.go:89] found id: "647cec4bbb47274dc1420ae531b76d776191e13d13b9fd04b9491583d76e562b"
	I1016 18:31:49.991537  296591 cri.go:89] found id: "ab2f53987fdb5f62ac2f6ecbf2cad5d434aa5db3641d2794a69fafe85c7ae170"
	I1016 18:31:49.991544  296591 cri.go:89] found id: "e61c60b433b3d2dc3a6ff511f85889007a52b6b282238326838c23b4a470fdf8"
	I1016 18:31:49.991548  296591 cri.go:89] found id: "9b8d270e350203a5340ad6d9042b73e17d91cd1645c28c1832675d24a7810006"
	I1016 18:31:49.991552  296591 cri.go:89] found id: "03a3db6c20e6f61d8de12e3b0e8dfa40712be1a186100fddf7ff3c5d3a2e0587"
	I1016 18:31:49.991557  296591 cri.go:89] found id: "04779c28f1cb8c52ec504e348fc93fc81c1b41fa21e6a652062eeab076efcbb7"
	I1016 18:31:49.991561  296591 cri.go:89] found id: "0b66af6e1e6d7fd2735eb36e2ebf313e19ff23b7b1b8b97956469bf3c79a9f5f"
	I1016 18:31:49.991564  296591 cri.go:89] found id: "b18e9cf1502f711153aae166f07b5f02021e0507c8f195aece2617ed442e892a"
	I1016 18:31:49.991577  296591 cri.go:89] found id: "9b2c049fb89ee7ff479ec6255ed7c0c81b6c9f0faf4d8e9c462dcc7f723f7e05"
	I1016 18:31:49.991598  296591 cri.go:89] found id: "e78a709a1e982b94959494ba3fcfe8d1d1c105e0303753e1f0337482c2a83b92"
	I1016 18:31:49.991611  296591 cri.go:89] found id: "ea8b339d31e4fb6b38988c306bb020b4436eeba762aa1a960b6697e387d1a153"
	I1016 18:31:49.991615  296591 cri.go:89] found id: ""
	I1016 18:31:49.991661  296591 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:31:50.007615  296591 retry.go:31] will retry after 272.187518ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:31:50Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:31:50.280179  296591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:31:50.295915  296591 pause.go:52] kubelet running: false
	I1016 18:31:50.295984  296591 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:31:50.494627  296591 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:31:50.494781  296591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:31:50.606376  296591 cri.go:89] found id: "647cec4bbb47274dc1420ae531b76d776191e13d13b9fd04b9491583d76e562b"
	I1016 18:31:50.606402  296591 cri.go:89] found id: "ab2f53987fdb5f62ac2f6ecbf2cad5d434aa5db3641d2794a69fafe85c7ae170"
	I1016 18:31:50.606408  296591 cri.go:89] found id: "e61c60b433b3d2dc3a6ff511f85889007a52b6b282238326838c23b4a470fdf8"
	I1016 18:31:50.606412  296591 cri.go:89] found id: "9b8d270e350203a5340ad6d9042b73e17d91cd1645c28c1832675d24a7810006"
	I1016 18:31:50.606416  296591 cri.go:89] found id: "03a3db6c20e6f61d8de12e3b0e8dfa40712be1a186100fddf7ff3c5d3a2e0587"
	I1016 18:31:50.606421  296591 cri.go:89] found id: "04779c28f1cb8c52ec504e348fc93fc81c1b41fa21e6a652062eeab076efcbb7"
	I1016 18:31:50.606425  296591 cri.go:89] found id: "0b66af6e1e6d7fd2735eb36e2ebf313e19ff23b7b1b8b97956469bf3c79a9f5f"
	I1016 18:31:50.606429  296591 cri.go:89] found id: "b18e9cf1502f711153aae166f07b5f02021e0507c8f195aece2617ed442e892a"
	I1016 18:31:50.606434  296591 cri.go:89] found id: "9b2c049fb89ee7ff479ec6255ed7c0c81b6c9f0faf4d8e9c462dcc7f723f7e05"
	I1016 18:31:50.606451  296591 cri.go:89] found id: "e78a709a1e982b94959494ba3fcfe8d1d1c105e0303753e1f0337482c2a83b92"
	I1016 18:31:50.606459  296591 cri.go:89] found id: "ea8b339d31e4fb6b38988c306bb020b4436eeba762aa1a960b6697e387d1a153"
	I1016 18:31:50.606463  296591 cri.go:89] found id: ""
	I1016 18:31:50.606514  296591 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:31:50.621794  296591 retry.go:31] will retry after 385.180474ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:31:50Z" level=error msg="open /run/runc: no such file or directory"
	I1016 18:31:51.007249  296591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:31:51.023460  296591 pause.go:52] kubelet running: false
	I1016 18:31:51.023506  296591 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1016 18:31:51.250755  296591 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1016 18:31:51.250841  296591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1016 18:31:51.345194  296591 cri.go:89] found id: "647cec4bbb47274dc1420ae531b76d776191e13d13b9fd04b9491583d76e562b"
	I1016 18:31:51.345229  296591 cri.go:89] found id: "ab2f53987fdb5f62ac2f6ecbf2cad5d434aa5db3641d2794a69fafe85c7ae170"
	I1016 18:31:51.345235  296591 cri.go:89] found id: "e61c60b433b3d2dc3a6ff511f85889007a52b6b282238326838c23b4a470fdf8"
	I1016 18:31:51.345258  296591 cri.go:89] found id: "9b8d270e350203a5340ad6d9042b73e17d91cd1645c28c1832675d24a7810006"
	I1016 18:31:51.345262  296591 cri.go:89] found id: "03a3db6c20e6f61d8de12e3b0e8dfa40712be1a186100fddf7ff3c5d3a2e0587"
	I1016 18:31:51.345274  296591 cri.go:89] found id: "04779c28f1cb8c52ec504e348fc93fc81c1b41fa21e6a652062eeab076efcbb7"
	I1016 18:31:51.345279  296591 cri.go:89] found id: "0b66af6e1e6d7fd2735eb36e2ebf313e19ff23b7b1b8b97956469bf3c79a9f5f"
	I1016 18:31:51.345283  296591 cri.go:89] found id: "b18e9cf1502f711153aae166f07b5f02021e0507c8f195aece2617ed442e892a"
	I1016 18:31:51.345288  296591 cri.go:89] found id: "9b2c049fb89ee7ff479ec6255ed7c0c81b6c9f0faf4d8e9c462dcc7f723f7e05"
	I1016 18:31:51.345298  296591 cri.go:89] found id: "e78a709a1e982b94959494ba3fcfe8d1d1c105e0303753e1f0337482c2a83b92"
	I1016 18:31:51.345305  296591 cri.go:89] found id: "ea8b339d31e4fb6b38988c306bb020b4436eeba762aa1a960b6697e387d1a153"
	I1016 18:31:51.345309  296591 cri.go:89] found id: ""
	I1016 18:31:51.345354  296591 ssh_runner.go:195] Run: sudo runc list -f json
	I1016 18:31:51.362769  296591 out.go:203] 
	W1016 18:31:51.364101  296591 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:31:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T18:31:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1016 18:31:51.364120  296591 out.go:285] * 
	* 
	W1016 18:31:51.370561  296591 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1016 18:31:51.372169  296591 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-523257 --alsologtostderr -v=1 failed: exit status 80
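
The stderr trace above shows the root cause of the pause failure: each container-listing step runs "sudo runc list -f json", which fails because /run/runc, runc's default state directory, does not exist on this CRI-O node, so minikube retries twice with growing jittered delays and finally exits with GUEST_PAUSE. A minimal sketch of that retry-with-backoff shape; the jittered doubling is an assumption for illustration, not minikube's retry.go verbatim:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs f up to attempts times, sleeping a jittered, doubling
    // delay between failures, and returns the last error if all fail.
    func retry(attempts int, initial time.Duration, f func() error) error {
    	delay := initial
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = f(); err == nil {
    			return nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %s: %v\n", wait, err)
    		time.Sleep(wait)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	err := retry(3, 250*time.Millisecond, func() error {
    		return errors.New("open /run/runc: no such file or directory")
    	})
    	fmt.Println("giving up:", err)
    }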
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-523257
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-523257:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0",
	        "Created": "2025-10-16T18:29:11.800479319Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 278386,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:30:46.146114431Z",
	            "FinishedAt": "2025-10-16T18:30:43.097053859Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0/hostname",
	        "HostsPath": "/var/lib/docker/containers/b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0/hosts",
	        "LogPath": "/var/lib/docker/containers/b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0/b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0-json.log",
	        "Name": "/default-k8s-diff-port-523257",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-523257:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-523257",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0",
	                "LowerDir": "/var/lib/docker/overlay2/3c55bed1f62478cc2c96719d866ecf1124db59b51bd2a9657261f8e58e8a903e-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3c55bed1f62478cc2c96719d866ecf1124db59b51bd2a9657261f8e58e8a903e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3c55bed1f62478cc2c96719d866ecf1124db59b51bd2a9657261f8e58e8a903e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3c55bed1f62478cc2c96719d866ecf1124db59b51bd2a9657261f8e58e8a903e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-523257",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-523257/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-523257",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-523257",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-523257",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "995af03ecb2973a541b0e9b3911ec2f6e4d5dfcbfa552004ae12e29ceef5157c",
	            "SandboxKey": "/var/run/docker/netns/995af03ecb29",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-523257": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:b1:5d:27:87:91",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "18ba3d11487252e3067f2e3b5f472d435c8e0f7e30303d875809bd325d5e3e3d",
	                    "EndpointID": "81e993dbf91e95cb698fea8d38c8713fbb65cbaae78f2e4143deb34ba11f6284",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-523257",
	                        "b0bbc4eeeb33"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
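
Aside: every exposed port in the inspect output above (22, 2376, 5000, 8444, 32443) is published on 127.0.0.1 with an ephemeral host port. A minimal, self-contained Go sketch of reading one mapping back, reusing the same inspect template the cli_runner lines later in this log pass to docker (illustrative only, not the harness's code; the container name comes from the dump above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Template as seen in the cli_runner.go log lines below; it indexes
	// .NetworkSettings.Ports["22/tcp"][0].HostPort for the profile container.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"default-k8s-diff-port-523257").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 33103 in the dump above
}
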
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-523257 -n default-k8s-diff-port-523257
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-523257 -n default-k8s-diff-port-523257: exit status 2 (397.116092ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-523257 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-523257 logs -n 25: (2.53771065s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                     ARGS                                     │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-084411 sudo systemctl cat kubelet --no-pager                         │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo journalctl -xeu kubelet --all --full --no-pager          │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo cat /etc/kubernetes/kubelet.conf                         │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo cat /var/lib/kubelet/config.yaml                         │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo systemctl status docker --all --full --no-pager          │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │                     │
	│ ssh     │ -p auto-084411 sudo systemctl cat docker --no-pager                          │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo cat /etc/docker/daemon.json                              │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │                     │
	│ ssh     │ -p auto-084411 sudo docker system info                                       │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │                     │
	│ ssh     │ -p auto-084411 sudo systemctl status cri-docker --all --full --no-pager      │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │                     │
	│ ssh     │ -p auto-084411 sudo systemctl cat cri-docker --no-pager                      │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │                     │
	│ ssh     │ -p auto-084411 sudo cat /usr/lib/systemd/system/cri-docker.service           │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo cri-dockerd --version                                    │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo systemctl status containerd --all --full --no-pager      │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │                     │
	│ ssh     │ -p auto-084411 sudo systemctl cat containerd --no-pager                      │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo cat /lib/systemd/system/containerd.service               │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo cat /etc/containerd/config.toml                          │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo containerd config dump                                   │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ image   │ default-k8s-diff-port-523257 image list --format=json                        │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ pause   │ -p default-k8s-diff-port-523257 --alsologtostderr -v=1                       │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │                     │
	│ ssh     │ -p auto-084411 sudo systemctl status crio --all --full --no-pager            │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo systemctl cat crio --no-pager                            │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo crio config                                              │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ delete  │ -p auto-084411                                                               │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:31:15
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:31:15.586846  287412 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:31:15.587100  287412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:31:15.587111  287412 out.go:374] Setting ErrFile to fd 2...
	I1016 18:31:15.587115  287412 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:31:15.587335  287412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:31:15.587912  287412 out.go:368] Setting JSON to false
	I1016 18:31:15.589151  287412 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4424,"bootTime":1760635052,"procs":402,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:31:15.589239  287412 start.go:141] virtualization: kvm guest
	I1016 18:31:15.591353  287412 out.go:179] * [calico-084411] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:31:15.593530  287412 notify.go:220] Checking for updates...
	I1016 18:31:15.593554  287412 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:31:15.595137  287412 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:31:15.596672  287412 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:31:15.598363  287412 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 18:31:15.599951  287412 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:31:15.601609  287412 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:31:15.603663  287412 config.go:182] Loaded profile config "auto-084411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:31:15.603816  287412 config.go:182] Loaded profile config "default-k8s-diff-port-523257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:31:15.603894  287412 config.go:182] Loaded profile config "kindnet-084411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:31:15.604084  287412 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:31:15.631753  287412 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 18:31:15.631879  287412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:31:15.705239  287412 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:91 SystemTime:2025-10-16 18:31:15.692653728 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:31:15.705340  287412 docker.go:318] overlay module found
	I1016 18:31:15.707520  287412 out.go:179] * Using the docker driver based on user configuration
	I1016 18:31:15.708934  287412 start.go:305] selected driver: docker
	I1016 18:31:15.708951  287412 start.go:925] validating driver "docker" against <nil>
	I1016 18:31:15.708966  287412 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:31:15.709609  287412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:31:15.773432  287412 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-16 18:31:15.762359153 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:31:15.773637  287412 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 18:31:15.773957  287412 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:31:15.776223  287412 out.go:179] * Using Docker driver with root privileges
	I1016 18:31:15.777670  287412 cni.go:84] Creating CNI manager for "calico"
	I1016 18:31:15.777695  287412 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1016 18:31:15.777799  287412 start.go:349] cluster config:
	{Name:calico-084411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-084411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:31:15.779304  287412 out.go:179] * Starting "calico-084411" primary control-plane node in "calico-084411" cluster
	I1016 18:31:15.781573  287412 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:31:15.783124  287412 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:31:15.784987  287412 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:31:15.785049  287412 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1016 18:31:15.785065  287412 cache.go:58] Caching tarball of preloaded images
	I1016 18:31:15.785086  287412 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:31:15.785189  287412 preload.go:233] Found /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1016 18:31:15.785207  287412 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:31:15.785341  287412 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/config.json ...
	I1016 18:31:15.785372  287412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/config.json: {Name:mkaa7e2711abc7a1d19b72f959d9291194a7893f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:31:15.813834  287412 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:31:15.813854  287412 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:31:15.813873  287412 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:31:15.813903  287412 start.go:360] acquireMachinesLock for calico-084411: {Name:mkbc38893e71078f07e46875dc98ed34f3d07173 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:31:15.814044  287412 start.go:364] duration metric: took 123.487µs to acquireMachinesLock for "calico-084411"
	I1016 18:31:15.814077  287412 start.go:93] Provisioning new machine with config: &{Name:calico-084411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-084411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:31:15.814148  287412 start.go:125] createHost starting for "" (driver="docker")
	W1016 18:31:12.659785  277941 pod_ready.go:104] pod "coredns-66bc5c9577-jx8q2" is not "Ready", error: <nil>
	W1016 18:31:14.661615  277941 pod_ready.go:104] pod "coredns-66bc5c9577-jx8q2" is not "Ready", error: <nil>
	I1016 18:31:11.434335  276879 addons.go:514] duration metric: took 636.743726ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1016 18:31:11.654381  276879 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-084411" context rescaled to 1 replicas
	W1016 18:31:13.154187  276879 node_ready.go:57] node "auto-084411" has "Ready":"False" status (will retry)
	W1016 18:31:15.661569  276879 node_ready.go:57] node "auto-084411" has "Ready":"False" status (will retry)
	I1016 18:31:14.941112  285821 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-084411:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.471971423s)
	I1016 18:31:14.941148  285821 kic.go:203] duration metric: took 4.472120351s to extract preloaded images to volume ...
	W1016 18:31:14.941265  285821 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1016 18:31:14.941313  285821 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1016 18:31:14.941356  285821 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1016 18:31:15.003689  285821 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-084411 --name kindnet-084411 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-084411 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-084411 --network kindnet-084411 --ip 192.168.103.2 --volume kindnet-084411:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1016 18:31:15.604789  285821 cli_runner.go:164] Run: docker container inspect kindnet-084411 --format={{.State.Running}}
	I1016 18:31:15.626668  285821 cli_runner.go:164] Run: docker container inspect kindnet-084411 --format={{.State.Status}}
	I1016 18:31:15.648525  285821 cli_runner.go:164] Run: docker exec kindnet-084411 stat /var/lib/dpkg/alternatives/iptables
	I1016 18:31:15.705072  285821 oci.go:144] the created container "kindnet-084411" has a running status.
	I1016 18:31:15.705108  285821 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/kindnet-084411/id_rsa...
	I1016 18:31:16.138526  285821 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21738-8849/.minikube/machines/kindnet-084411/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1016 18:31:16.172544  285821 cli_runner.go:164] Run: docker container inspect kindnet-084411 --format={{.State.Status}}
	I1016 18:31:16.193146  285821 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1016 18:31:16.193174  285821 kic_runner.go:114] Args: [docker exec --privileged kindnet-084411 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1016 18:31:16.244643  285821 cli_runner.go:164] Run: docker container inspect kindnet-084411 --format={{.State.Status}}
	I1016 18:31:16.267403  285821 machine.go:93] provisionDockerMachine start ...
	I1016 18:31:16.267508  285821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-084411
	I1016 18:31:16.288453  285821 main.go:141] libmachine: Using SSH client type: native
	I1016 18:31:16.288838  285821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1016 18:31:16.288867  285821 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:31:16.448045  285821 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-084411
	
	I1016 18:31:16.448076  285821 ubuntu.go:182] provisioning hostname "kindnet-084411"
	I1016 18:31:16.448134  285821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-084411
	I1016 18:31:16.470745  285821 main.go:141] libmachine: Using SSH client type: native
	I1016 18:31:16.471050  285821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1016 18:31:16.471072  285821 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-084411 && echo "kindnet-084411" | sudo tee /etc/hostname
	I1016 18:31:16.627043  285821 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-084411
	
	I1016 18:31:16.627146  285821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-084411
	I1016 18:31:16.648986  285821 main.go:141] libmachine: Using SSH client type: native
	I1016 18:31:16.649305  285821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1016 18:31:16.649338  285821 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-084411' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-084411/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-084411' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:31:16.791039  285821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:31:16.791068  285821 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8849/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8849/.minikube}
	I1016 18:31:16.791127  285821 ubuntu.go:190] setting up certificates
	I1016 18:31:16.791138  285821 provision.go:84] configureAuth start
	I1016 18:31:16.791192  285821 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-084411
	I1016 18:31:16.811213  285821 provision.go:143] copyHostCerts
	I1016 18:31:16.811288  285821 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem, removing ...
	I1016 18:31:16.811298  285821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem
	I1016 18:31:16.811366  285821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem (1078 bytes)
	I1016 18:31:16.811452  285821 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem, removing ...
	I1016 18:31:16.811460  285821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem
	I1016 18:31:16.811489  285821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem (1123 bytes)
	I1016 18:31:16.811547  285821 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem, removing ...
	I1016 18:31:16.811554  285821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem
	I1016 18:31:16.811576  285821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem (1679 bytes)
	I1016 18:31:16.811626  285821 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem org=jenkins.kindnet-084411 san=[127.0.0.1 192.168.103.2 kindnet-084411 localhost minikube]
	I1016 18:31:17.240314  285821 provision.go:177] copyRemoteCerts
	I1016 18:31:17.240367  285821 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:31:17.240409  285821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-084411
	I1016 18:31:17.260551  285821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/kindnet-084411/id_rsa Username:docker}
	I1016 18:31:17.360855  285821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 18:31:17.387707  285821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1016 18:31:17.407510  285821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1016 18:31:17.427046  285821 provision.go:87] duration metric: took 635.895699ms to configureAuth
	I1016 18:31:17.427087  285821 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:31:17.427267  285821 config.go:182] Loaded profile config "kindnet-084411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:31:17.427385  285821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-084411
	I1016 18:31:17.447655  285821 main.go:141] libmachine: Using SSH client type: native
	I1016 18:31:17.448106  285821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1016 18:31:17.448153  285821 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:31:17.721224  285821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:31:17.721246  285821 machine.go:96] duration metric: took 1.453813252s to provisionDockerMachine
	I1016 18:31:17.721255  285821 client.go:171] duration metric: took 7.887289097s to LocalClient.Create
	I1016 18:31:17.721272  285821 start.go:167] duration metric: took 7.887351254s to libmachine.API.Create "kindnet-084411"
	I1016 18:31:17.721280  285821 start.go:293] postStartSetup for "kindnet-084411" (driver="docker")
	I1016 18:31:17.721292  285821 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:31:17.721348  285821 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:31:17.721395  285821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-084411
	I1016 18:31:17.740235  285821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/kindnet-084411/id_rsa Username:docker}
	I1016 18:31:17.844756  285821 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:31:17.848916  285821 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:31:17.848950  285821 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:31:17.848966  285821 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/addons for local assets ...
	I1016 18:31:17.849047  285821 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/files for local assets ...
	I1016 18:31:17.849154  285821 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem -> 123752.pem in /etc/ssl/certs
	I1016 18:31:17.849303  285821 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:31:17.858087  285821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:31:17.881538  285821 start.go:296] duration metric: took 160.24332ms for postStartSetup
	I1016 18:31:17.881912  285821 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-084411
	I1016 18:31:17.902314  285821 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/config.json ...
	I1016 18:31:17.902578  285821 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:31:17.902619  285821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-084411
	I1016 18:31:17.921963  285821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/kindnet-084411/id_rsa Username:docker}
	I1016 18:31:18.019203  285821 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:31:18.024310  285821 start.go:128] duration metric: took 8.192635488s to createHost
	I1016 18:31:18.024339  285821 start.go:83] releasing machines lock for "kindnet-084411", held for 8.192812356s
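
Aside: the acquireMachinesLock/releasing lines above bracket the whole createHost step behind a file lock configured with Delay:500ms and Timeout:10m0s. A hypothetical stand-alone version of that pattern (this is not minikube's lock.go; the path and helper names are invented for illustration):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire takes an exclusive lock by creating the lock file with O_EXCL,
// retrying every 500ms (the Delay in the log's lock config) until the
// timeout elapses. It returns a release func on success.
func acquire(path string, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out waiting for lock " + path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	release, err := acquire("/tmp/machines-demo.lock", 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held; provisioning would run here")
}
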
	I1016 18:31:18.024417  285821 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-084411
	I1016 18:31:18.043575  285821 ssh_runner.go:195] Run: cat /version.json
	I1016 18:31:18.043625  285821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-084411
	I1016 18:31:18.043658  285821 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:31:18.043748  285821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-084411
	I1016 18:31:18.064219  285821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/kindnet-084411/id_rsa Username:docker}
	I1016 18:31:18.064223  285821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/kindnet-084411/id_rsa Username:docker}
	I1016 18:31:18.227843  285821 ssh_runner.go:195] Run: systemctl --version
	I1016 18:31:18.235295  285821 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:31:18.274156  285821 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:31:18.279283  285821 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:31:18.279351  285821 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:31:18.311236  285821 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1016 18:31:18.311274  285821 start.go:495] detecting cgroup driver to use...
	I1016 18:31:18.311313  285821 detect.go:190] detected "systemd" cgroup driver on host os
	I1016 18:31:18.311365  285821 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:31:18.333613  285821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:31:18.347043  285821 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:31:18.347115  285821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:31:18.365698  285821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:31:18.388534  285821 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:31:18.485684  285821 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:31:18.589294  285821 docker.go:234] disabling docker service ...
	I1016 18:31:18.589368  285821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:31:18.611548  285821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:31:18.626093  285821 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:31:18.717157  285821 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:31:18.808709  285821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:31:18.823811  285821 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:31:18.841737  285821 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:31:18.841794  285821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:31:18.930755  285821 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1016 18:31:18.930820  285821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:31:18.996910  285821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:31:19.060513  285821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:31:19.126245  285821 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:31:19.136189  285821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:31:19.188188  285821 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:31:19.253747  285821 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:31:19.277446  285821 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:31:19.286145  285821 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1016 18:31:19.295593  285821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:31:19.379493  285821 ssh_runner.go:195] Run: sudo systemctl restart crio
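
Aside: the sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl) before crio is restarted. Purely as an illustration, here is the pause_image substitution expressed as an equivalent Go regexp instead of sed; the starting config text is a made-up placeholder:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Placeholder contents standing in for /etc/crio/crio.conf.d/02-crio.conf.
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\""
	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	fmt.Println(re.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`))
}
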
	I1016 18:31:15.816669  287412 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1016 18:31:15.816961  287412 start.go:159] libmachine.API.Create for "calico-084411" (driver="docker")
	I1016 18:31:15.816998  287412 client.go:168] LocalClient.Create starting
	I1016 18:31:15.817103  287412 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem
	I1016 18:31:15.817161  287412 main.go:141] libmachine: Decoding PEM data...
	I1016 18:31:15.817185  287412 main.go:141] libmachine: Parsing certificate...
	I1016 18:31:15.817247  287412 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem
	I1016 18:31:15.817276  287412 main.go:141] libmachine: Decoding PEM data...
	I1016 18:31:15.817298  287412 main.go:141] libmachine: Parsing certificate...
	I1016 18:31:15.817653  287412 cli_runner.go:164] Run: docker network inspect calico-084411 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1016 18:31:15.837893  287412 cli_runner.go:211] docker network inspect calico-084411 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1016 18:31:15.837967  287412 network_create.go:284] running [docker network inspect calico-084411] to gather additional debugging logs...
	I1016 18:31:15.837984  287412 cli_runner.go:164] Run: docker network inspect calico-084411
	W1016 18:31:15.859785  287412 cli_runner.go:211] docker network inspect calico-084411 returned with exit code 1
	I1016 18:31:15.859825  287412 network_create.go:287] error running [docker network inspect calico-084411]: docker network inspect calico-084411: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-084411 not found
	I1016 18:31:15.859840  287412 network_create.go:289] output of [docker network inspect calico-084411]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-084411 not found
	
	** /stderr **
	I1016 18:31:15.859985  287412 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:31:15.885728  287412 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e6b487beca69 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:46:43:25:0f:93} reservation:<nil>}
	I1016 18:31:15.886975  287412 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9d79ecee39e1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:a0:12:f5:af:3a} reservation:<nil>}
	I1016 18:31:15.887748  287412 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-23b5ade12eda IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:13:e4:8d:c1:04} reservation:<nil>}
	I1016 18:31:15.888580  287412 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e73750}
	I1016 18:31:15.888615  287412 network_create.go:124] attempt to create docker network calico-084411 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1016 18:31:15.888670  287412 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-084411 calico-084411
	I1016 18:31:15.963560  287412 network_create.go:108] docker network calico-084411 192.168.76.0/24 created
	I1016 18:31:15.963593  287412 kic.go:121] calculated static IP "192.168.76.2" for the "calico-084411" container
	I1016 18:31:15.963685  287412 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1016 18:31:15.989192  287412 cli_runner.go:164] Run: docker volume create calico-084411 --label name.minikube.sigs.k8s.io=calico-084411 --label created_by.minikube.sigs.k8s.io=true
	I1016 18:31:16.010995  287412 oci.go:103] Successfully created a docker volume calico-084411
	I1016 18:31:16.011075  287412 cli_runner.go:164] Run: docker run --rm --name calico-084411-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-084411 --entrypoint /usr/bin/test -v calico-084411:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -d /var/lib
	I1016 18:31:16.449513  287412 oci.go:107] Successfully prepared a docker volume calico-084411
	I1016 18:31:16.449558  287412 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:31:16.449582  287412 kic.go:194] Starting extracting preloaded images to volume ...
	I1016 18:31:16.449652  287412 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-084411:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir
	W1016 18:31:17.158515  277941 pod_ready.go:104] pod "coredns-66bc5c9577-jx8q2" is not "Ready", error: <nil>
	W1016 18:31:19.158929  277941 pod_ready.go:104] pod "coredns-66bc5c9577-jx8q2" is not "Ready", error: <nil>
	W1016 18:31:18.154835  276879 node_ready.go:57] node "auto-084411" has "Ready":"False" status (will retry)
	W1016 18:31:20.654119  276879 node_ready.go:57] node "auto-084411" has "Ready":"False" status (will retry)
	I1016 18:31:21.140054  285821 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.760526697s)
	I1016 18:31:21.140096  285821 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:31:21.140146  285821 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:31:21.144544  285821 start.go:563] Will wait 60s for crictl version
	I1016 18:31:21.144610  285821 ssh_runner.go:195] Run: which crictl
	I1016 18:31:21.149083  285821 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:31:21.181803  285821 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:31:21.181877  285821 ssh_runner.go:195] Run: crio --version
	I1016 18:31:21.214303  285821 ssh_runner.go:195] Run: crio --version
	I1016 18:31:21.251679  285821 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
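
The two 60-second waits above (socket path, then crictl version) amount to polling for the CRI socket and asking the runtime to identify itself; a minimal sketch of the same checks:

    # wait up to 60s for CRI-O's socket to appear after the restart
    for _ in $(seq 60); do
      test -S /var/run/crio/crio.sock && break
      sleep 1
    done
    # confirm the runtime answers over CRI, as start.go:579 logs above
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    crio --version
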
	I1016 18:31:22.153655  276879 node_ready.go:49] node "auto-084411" is "Ready"
	I1016 18:31:22.153688  276879 node_ready.go:38] duration metric: took 11.003079497s for node "auto-084411" to be "Ready" ...
	I1016 18:31:22.153710  276879 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:31:22.153777  276879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:31:22.169353  276879 api_server.go:72] duration metric: took 11.37211601s to wait for apiserver process to appear ...
	I1016 18:31:22.169385  276879 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:31:22.169416  276879 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1016 18:31:22.174196  276879 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1016 18:31:22.175373  276879 api_server.go:141] control plane version: v1.34.1
	I1016 18:31:22.175398  276879 api_server.go:131] duration metric: took 6.007069ms to wait for apiserver health ...
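
The healthz wait is a plain HTTPS probe until /healthz answers 200/"ok", followed by a /version read for the control-plane version; roughly:

    apiserver=https://192.168.94.2:8443
    # -k skips CA verification in this sketch; minikube itself trusts its
    # generated CA rather than probing insecurely
    until curl -sk --max-time 2 "$apiserver/healthz" | grep -qx ok; do
      sleep 1
    done
    curl -sk "$apiserver/version"   # reports v1.34.1 in this run
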
	I1016 18:31:22.175407  276879 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:31:22.178406  276879 system_pods.go:59] 8 kube-system pods found
	I1016 18:31:22.178445  276879 system_pods.go:61] "coredns-66bc5c9577-d6wb4" [9a86dbe2-c21d-4adf-adc1-a8337e3c8107] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:31:22.178452  276879 system_pods.go:61] "etcd-auto-084411" [65955b7c-83a5-491d-a525-2d006af2870b] Running
	I1016 18:31:22.178458  276879 system_pods.go:61] "kindnet-48bht" [a66e23fa-5f05-433d-aa5e-fcf7fe1ad12c] Running
	I1016 18:31:22.178461  276879 system_pods.go:61] "kube-apiserver-auto-084411" [b907059e-fa55-4c08-bd7b-4e0924e7262b] Running
	I1016 18:31:22.178469  276879 system_pods.go:61] "kube-controller-manager-auto-084411" [f7eb74a0-49f8-46ca-94bd-b63055134213] Running
	I1016 18:31:22.178473  276879 system_pods.go:61] "kube-proxy-ft7bc" [a8bf9953-9293-42ab-a12c-6ee362f56101] Running
	I1016 18:31:22.178479  276879 system_pods.go:61] "kube-scheduler-auto-084411" [c84b3f3e-ba49-4f23-8cbb-8f04e55b974e] Running
	I1016 18:31:22.178484  276879 system_pods.go:61] "storage-provisioner" [51fe6dcc-f165-47d7-887b-6d22608e6d0e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:31:22.178492  276879 system_pods.go:74] duration metric: took 3.080141ms to wait for pod list to return data ...
	I1016 18:31:22.178501  276879 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:31:22.180867  276879 default_sa.go:45] found service account: "default"
	I1016 18:31:22.180899  276879 default_sa.go:55] duration metric: took 2.381263ms for default service account to be created ...
	I1016 18:31:22.180909  276879 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 18:31:22.183584  276879 system_pods.go:86] 8 kube-system pods found
	I1016 18:31:22.183609  276879 system_pods.go:89] "coredns-66bc5c9577-d6wb4" [9a86dbe2-c21d-4adf-adc1-a8337e3c8107] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:31:22.183615  276879 system_pods.go:89] "etcd-auto-084411" [65955b7c-83a5-491d-a525-2d006af2870b] Running
	I1016 18:31:22.183621  276879 system_pods.go:89] "kindnet-48bht" [a66e23fa-5f05-433d-aa5e-fcf7fe1ad12c] Running
	I1016 18:31:22.183625  276879 system_pods.go:89] "kube-apiserver-auto-084411" [b907059e-fa55-4c08-bd7b-4e0924e7262b] Running
	I1016 18:31:22.183628  276879 system_pods.go:89] "kube-controller-manager-auto-084411" [f7eb74a0-49f8-46ca-94bd-b63055134213] Running
	I1016 18:31:22.183633  276879 system_pods.go:89] "kube-proxy-ft7bc" [a8bf9953-9293-42ab-a12c-6ee362f56101] Running
	I1016 18:31:22.183636  276879 system_pods.go:89] "kube-scheduler-auto-084411" [c84b3f3e-ba49-4f23-8cbb-8f04e55b974e] Running
	I1016 18:31:22.183641  276879 system_pods.go:89] "storage-provisioner" [51fe6dcc-f165-47d7-887b-6d22608e6d0e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:31:22.183659  276879 retry.go:31] will retry after 190.00605ms: missing components: kube-dns
	I1016 18:31:22.378417  276879 system_pods.go:86] 8 kube-system pods found
	I1016 18:31:22.378453  276879 system_pods.go:89] "coredns-66bc5c9577-d6wb4" [9a86dbe2-c21d-4adf-adc1-a8337e3c8107] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:31:22.378460  276879 system_pods.go:89] "etcd-auto-084411" [65955b7c-83a5-491d-a525-2d006af2870b] Running
	I1016 18:31:22.378468  276879 system_pods.go:89] "kindnet-48bht" [a66e23fa-5f05-433d-aa5e-fcf7fe1ad12c] Running
	I1016 18:31:22.378472  276879 system_pods.go:89] "kube-apiserver-auto-084411" [b907059e-fa55-4c08-bd7b-4e0924e7262b] Running
	I1016 18:31:22.378477  276879 system_pods.go:89] "kube-controller-manager-auto-084411" [f7eb74a0-49f8-46ca-94bd-b63055134213] Running
	I1016 18:31:22.378483  276879 system_pods.go:89] "kube-proxy-ft7bc" [a8bf9953-9293-42ab-a12c-6ee362f56101] Running
	I1016 18:31:22.378487  276879 system_pods.go:89] "kube-scheduler-auto-084411" [c84b3f3e-ba49-4f23-8cbb-8f04e55b974e] Running
	I1016 18:31:22.378494  276879 system_pods.go:89] "storage-provisioner" [51fe6dcc-f165-47d7-887b-6d22608e6d0e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:31:22.378514  276879 retry.go:31] will retry after 377.423667ms: missing components: kube-dns
	I1016 18:31:22.760678  276879 system_pods.go:86] 8 kube-system pods found
	I1016 18:31:22.760726  276879 system_pods.go:89] "coredns-66bc5c9577-d6wb4" [9a86dbe2-c21d-4adf-adc1-a8337e3c8107] Running
	I1016 18:31:22.760735  276879 system_pods.go:89] "etcd-auto-084411" [65955b7c-83a5-491d-a525-2d006af2870b] Running
	I1016 18:31:22.760741  276879 system_pods.go:89] "kindnet-48bht" [a66e23fa-5f05-433d-aa5e-fcf7fe1ad12c] Running
	I1016 18:31:22.760746  276879 system_pods.go:89] "kube-apiserver-auto-084411" [b907059e-fa55-4c08-bd7b-4e0924e7262b] Running
	I1016 18:31:22.760752  276879 system_pods.go:89] "kube-controller-manager-auto-084411" [f7eb74a0-49f8-46ca-94bd-b63055134213] Running
	I1016 18:31:22.760757  276879 system_pods.go:89] "kube-proxy-ft7bc" [a8bf9953-9293-42ab-a12c-6ee362f56101] Running
	I1016 18:31:22.760763  276879 system_pods.go:89] "kube-scheduler-auto-084411" [c84b3f3e-ba49-4f23-8cbb-8f04e55b974e] Running
	I1016 18:31:22.760768  276879 system_pods.go:89] "storage-provisioner" [51fe6dcc-f165-47d7-887b-6d22608e6d0e] Running
	I1016 18:31:22.760778  276879 system_pods.go:126] duration metric: took 579.861638ms to wait for k8s-apps to be running ...
	I1016 18:31:22.760789  276879 system_svc.go:44] waiting for kubelet service to be running ....
	I1016 18:31:22.760850  276879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:31:22.774894  276879 system_svc.go:56] duration metric: took 14.096089ms WaitForService to wait for kubelet
	I1016 18:31:22.774924  276879 kubeadm.go:586] duration metric: took 11.977691156s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:31:22.774950  276879 node_conditions.go:102] verifying NodePressure condition ...
	I1016 18:31:22.778110  276879 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1016 18:31:22.778135  276879 node_conditions.go:123] node cpu capacity is 8
	I1016 18:31:22.778147  276879 node_conditions.go:105] duration metric: took 3.191987ms to run NodePressure ...
	I1016 18:31:22.778158  276879 start.go:241] waiting for startup goroutines ...
	I1016 18:31:22.778165  276879 start.go:246] waiting for cluster config update ...
	I1016 18:31:22.778173  276879 start.go:255] writing updated cluster config ...
	I1016 18:31:22.778490  276879 ssh_runner.go:195] Run: rm -f paused
	I1016 18:31:22.783239  276879 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:31:22.786788  276879 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d6wb4" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:22.791923  276879 pod_ready.go:94] pod "coredns-66bc5c9577-d6wb4" is "Ready"
	I1016 18:31:22.791946  276879 pod_ready.go:86] duration metric: took 5.134906ms for pod "coredns-66bc5c9577-d6wb4" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:22.794276  276879 pod_ready.go:83] waiting for pod "etcd-auto-084411" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:22.799350  276879 pod_ready.go:94] pod "etcd-auto-084411" is "Ready"
	I1016 18:31:22.799378  276879 pod_ready.go:86] duration metric: took 5.078874ms for pod "etcd-auto-084411" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:22.801754  276879 pod_ready.go:83] waiting for pod "kube-apiserver-auto-084411" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:22.806406  276879 pod_ready.go:94] pod "kube-apiserver-auto-084411" is "Ready"
	I1016 18:31:22.806443  276879 pod_ready.go:86] duration metric: took 4.654082ms for pod "kube-apiserver-auto-084411" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:22.808869  276879 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-084411" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:23.187245  276879 pod_ready.go:94] pod "kube-controller-manager-auto-084411" is "Ready"
	I1016 18:31:23.187274  276879 pod_ready.go:86] duration metric: took 378.381023ms for pod "kube-controller-manager-auto-084411" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:23.387801  276879 pod_ready.go:83] waiting for pod "kube-proxy-ft7bc" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:23.787895  276879 pod_ready.go:94] pod "kube-proxy-ft7bc" is "Ready"
	I1016 18:31:23.787926  276879 pod_ready.go:86] duration metric: took 400.096744ms for pod "kube-proxy-ft7bc" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:23.988382  276879 pod_ready.go:83] waiting for pod "kube-scheduler-auto-084411" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:24.387974  276879 pod_ready.go:94] pod "kube-scheduler-auto-084411" is "Ready"
	I1016 18:31:24.388012  276879 pod_ready.go:86] duration metric: took 399.602012ms for pod "kube-scheduler-auto-084411" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:24.388026  276879 pod_ready.go:40] duration metric: took 1.604750673s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:31:24.440489  276879 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1016 18:31:24.442410  276879 out.go:179] * Done! kubectl is now configured to use "auto-084411" cluster and "default" namespace by default
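
The "extra waiting" phase above is close to waiting on each core-component label in kube-system; with kubectl it could be sketched as (label list copied from the log):

    for l in k8s-app=kube-dns component=etcd component=kube-apiserver \
             component=kube-controller-manager k8s-app=kube-proxy \
             component=kube-scheduler; do
      # note: kubectl wait errors if no pod carries the label, whereas
      # minikube's pod_ready loop also accepts the pod being gone
      kubectl -n kube-system wait --for=condition=Ready pod -l "$l" --timeout=4m0s
    done
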
	I1016 18:31:21.254490  285821 cli_runner.go:164] Run: docker network inspect kindnet-084411 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:31:21.275345  285821 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1016 18:31:21.280655  285821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
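
That one-liner is an idempotent replace-then-append of a single /etc/hosts entry; spelled out with illustrative variable names:

    name=host.minikube.internal
    ip=192.168.103.1
    # strip any stale "<tab>host.minikube.internal" line, append the fresh
    # entry, then copy the temp file back into place with root privileges
    { grep -v "$(printf '\t%s$' "$name")" /etc/hosts; \
      printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
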
	I1016 18:31:21.292493  285821 kubeadm.go:883] updating cluster {Name:kindnet-084411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-084411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:31:21.292631  285821 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:31:21.292696  285821 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:31:21.330340  285821 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:31:21.330361  285821 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:31:21.330409  285821 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:31:21.359603  285821 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:31:21.359631  285821 cache_images.go:85] Images are preloaded, skipping loading
	I1016 18:31:21.359641  285821 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1016 18:31:21.359766  285821 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-084411 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-084411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
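
That kubelet fragment lands in the systemd drop-in scp'd below as 10-kubeadm.conf; writing it by hand would look like the following sketch (the empty ExecStart= is what clears the stock unit's command list before overriding it):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' \
      '[Unit]' 'Wants=crio.service' '' \
      '[Service]' 'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-084411 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2' \
      '' '[Install]' \
      | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    sudo systemctl daemon-reload
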
	I1016 18:31:21.359853  285821 ssh_runner.go:195] Run: crio config
	I1016 18:31:21.412236  285821 cni.go:84] Creating CNI manager for "kindnet"
	I1016 18:31:21.412281  285821 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:31:21.412307  285821 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-084411 NodeName:kindnet-084411 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:31:21.412451  285821 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-084411"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
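
Once scp'd to the node as kubeadm.yaml.new and copied to /var/tmp/minikube/kubeadm.yaml (both visible further down), this config drives kubeadm init with minikube's version-pinned binaries prepended to PATH; the core invocation, with the long preflight ignore list abbreviated from the full command logged later:

    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem
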
	I1016 18:31:21.412540  285821 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:31:21.422561  285821 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:31:21.422628  285821 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:31:21.432917  285821 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I1016 18:31:21.448945  285821 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:31:21.468004  285821 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1016 18:31:21.485930  285821 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1016 18:31:21.490738  285821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:31:21.502993  285821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:31:21.602669  285821 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:31:21.628804  285821 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411 for IP: 192.168.103.2
	I1016 18:31:21.628826  285821 certs.go:195] generating shared ca certs ...
	I1016 18:31:21.628845  285821 certs.go:227] acquiring lock for ca certs: {Name:mkebf15a3970a66f77cf66e14f6efeaafcab5e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:31:21.629017  285821 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key
	I1016 18:31:21.629066  285821 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key
	I1016 18:31:21.629079  285821 certs.go:257] generating profile certs ...
	I1016 18:31:21.629137  285821 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/client.key
	I1016 18:31:21.629159  285821 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/client.crt with IP's: []
	I1016 18:31:21.972625  285821 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/client.crt ...
	I1016 18:31:21.972656  285821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/client.crt: {Name:mk3b6790268dd13be8387a72046ec1ce8c8b6108 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:31:21.972848  285821 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/client.key ...
	I1016 18:31:21.972877  285821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/client.key: {Name:mk94efe3cfde97599b2597e1d06e683c9cc9493f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:31:21.973021  285821 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/apiserver.key.169274aa
	I1016 18:31:21.973046  285821 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/apiserver.crt.169274aa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1016 18:31:22.495039  285821 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/apiserver.crt.169274aa ...
	I1016 18:31:22.495068  285821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/apiserver.crt.169274aa: {Name:mk8610197c1033686f93ab7345973683b0db2f11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:31:22.495267  285821 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/apiserver.key.169274aa ...
	I1016 18:31:22.495285  285821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/apiserver.key.169274aa: {Name:mkd3f110d257c1592d68524652a62ffdda5170d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:31:22.495394  285821 certs.go:382] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/apiserver.crt.169274aa -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/apiserver.crt
	I1016 18:31:22.495481  285821 certs.go:386] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/apiserver.key.169274aa -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/apiserver.key
	I1016 18:31:22.495548  285821 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/proxy-client.key
	I1016 18:31:22.495565  285821 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/proxy-client.crt with IP's: []
	I1016 18:31:23.347015  285821 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/proxy-client.crt ...
	I1016 18:31:23.347049  285821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/proxy-client.crt: {Name:mk9d9f6606c8987c57f3d8debecd7f8610f54695 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:31:23.347235  285821 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/proxy-client.key ...
	I1016 18:31:23.347252  285821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/proxy-client.key: {Name:mkaf0960335f16f1d2b57149e931e4edd4877d16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
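
crypto.go's profile certs are ordinary CA-signed X.509 pairs; an openssl approximation of the client and apiserver certs (file names and subjects here are illustrative, the SAN IP list is the one logged above, and the process substitution requires bash):

    # client cert, signed by the shared minikubeCA key pair
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj '/O=system:masters/CN=minikube-user' -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -out client.crt -days 365
    # apiserver cert carrying the SAN set from the log
    openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
      -subj '/CN=minikube' -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -out apiserver.crt -days 365 \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.103.2')
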
	I1016 18:31:23.347479  285821 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem (1338 bytes)
	W1016 18:31:23.347529  285821 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375_empty.pem, impossibly tiny 0 bytes
	I1016 18:31:23.347544  285821 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem (1679 bytes)
	I1016 18:31:23.347578  285821 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem (1078 bytes)
	I1016 18:31:23.347611  285821 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:31:23.347644  285821 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem (1679 bytes)
	I1016 18:31:23.347696  285821 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:31:23.348326  285821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:31:23.368454  285821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1016 18:31:23.389296  285821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:31:23.409623  285821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1016 18:31:23.428995  285821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1016 18:31:23.448476  285821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1016 18:31:23.467284  285821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:31:23.486904  285821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/kindnet-084411/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:31:23.506009  285821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /usr/share/ca-certificates/123752.pem (1708 bytes)
	I1016 18:31:23.527577  285821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:31:23.547866  285821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem --> /usr/share/ca-certificates/12375.pem (1338 bytes)
	I1016 18:31:23.566501  285821 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:31:23.581937  285821 ssh_runner.go:195] Run: openssl version
	I1016 18:31:23.588591  285821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123752.pem && ln -fs /usr/share/ca-certificates/123752.pem /etc/ssl/certs/123752.pem"
	I1016 18:31:23.597891  285821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123752.pem
	I1016 18:31:23.602572  285821 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:49 /usr/share/ca-certificates/123752.pem
	I1016 18:31:23.602642  285821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123752.pem
	I1016 18:31:23.639447  285821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123752.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:31:23.650127  285821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:31:23.660677  285821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:31:23.665116  285821 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:31:23.665174  285821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:31:23.703836  285821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1016 18:31:23.713023  285821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12375.pem && ln -fs /usr/share/ca-certificates/12375.pem /etc/ssl/certs/12375.pem"
	I1016 18:31:23.721863  285821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12375.pem
	I1016 18:31:23.725919  285821 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:49 /usr/share/ca-certificates/12375.pem
	I1016 18:31:23.725993  285821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12375.pem
	I1016 18:31:23.761964  285821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12375.pem /etc/ssl/certs/51391683.0"
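
Each of those openssl/ln pairs publishes a CA into the hashed-symlink layout OpenSSL uses for lookup: the link name is the subject hash plus ".0". The pattern, using the minikubeCA file as the example:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # prints b5213941 for this CA
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
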
	I1016 18:31:23.771506  285821 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:31:23.776399  285821 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1016 18:31:23.776458  285821 kubeadm.go:400] StartCluster: {Name:kindnet-084411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-084411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:31:23.776537  285821 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:31:23.776577  285821 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:31:23.806197  285821 cri.go:89] found id: ""
	I1016 18:31:23.806261  285821 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:31:23.816422  285821 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 18:31:23.825869  285821 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1016 18:31:23.825938  285821 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 18:31:23.834477  285821 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1016 18:31:23.834497  285821 kubeadm.go:157] found existing configuration files:
	
	I1016 18:31:23.834551  285821 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1016 18:31:23.842505  285821 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1016 18:31:23.842558  285821 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 18:31:23.850863  285821 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1016 18:31:23.859139  285821 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1016 18:31:23.859205  285821 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 18:31:23.867313  285821 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1016 18:31:23.875534  285821 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1016 18:31:23.875596  285821 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1016 18:31:23.884083  285821 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1016 18:31:23.892822  285821 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1016 18:31:23.892895  285821 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
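
The stale-config sweep above reduces to: keep a kubeconfig only if it already points at the expected control-plane endpoint, otherwise remove it so kubeadm regenerates it. As one loop:

    endpoint=https://control-plane.minikube.internal:8443
    for f in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${f}.conf"
      # missing file or wrong endpoint -> delete (rm -f tolerates absence)
      sudo grep -q "$endpoint" "$conf" 2>/dev/null || sudo rm -f "$conf"
    done
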
	I1016 18:31:23.900569  285821 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1016 18:31:23.966045  285821 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1016 18:31:24.041328  285821 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1016 18:31:21.052552  287412 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-084411:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 -I lz4 -xf /preloaded.tar -C /extractDir: (4.602841292s)
	I1016 18:31:21.052585  287412 kic.go:203] duration metric: took 4.603000429s to extract preloaded images to volume ...
	W1016 18:31:21.052670  287412 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1016 18:31:21.052707  287412 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1016 18:31:21.053024  287412 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1016 18:31:21.114329  287412 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-084411 --name calico-084411 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-084411 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-084411 --network calico-084411 --ip 192.168.76.2 --volume calico-084411:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225
	I1016 18:31:21.413541  287412 cli_runner.go:164] Run: docker container inspect calico-084411 --format={{.State.Running}}
	I1016 18:31:21.435551  287412 cli_runner.go:164] Run: docker container inspect calico-084411 --format={{.State.Status}}
	I1016 18:31:21.456334  287412 cli_runner.go:164] Run: docker exec calico-084411 stat /var/lib/dpkg/alternatives/iptables
	I1016 18:31:21.508688  287412 oci.go:144] the created container "calico-084411" has a running status.
	I1016 18:31:21.508737  287412 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/calico-084411/id_rsa...
	I1016 18:31:21.943582  287412 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21738-8849/.minikube/machines/calico-084411/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1016 18:31:21.970353  287412 cli_runner.go:164] Run: docker container inspect calico-084411 --format={{.State.Status}}
	I1016 18:31:21.989634  287412 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1016 18:31:21.989658  287412 kic_runner.go:114] Args: [docker exec --privileged calico-084411 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1016 18:31:22.039285  287412 cli_runner.go:164] Run: docker container inspect calico-084411 --format={{.State.Status}}
	I1016 18:31:22.058795  287412 machine.go:93] provisionDockerMachine start ...
	I1016 18:31:22.058902  287412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-084411
	I1016 18:31:22.081933  287412 main.go:141] libmachine: Using SSH client type: native
	I1016 18:31:22.082258  287412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1016 18:31:22.082276  287412 main.go:141] libmachine: About to run SSH command:
	hostname
	I1016 18:31:22.226355  287412 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-084411
	
	I1016 18:31:22.226384  287412 ubuntu.go:182] provisioning hostname "calico-084411"
	I1016 18:31:22.226440  287412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-084411
	I1016 18:31:22.248753  287412 main.go:141] libmachine: Using SSH client type: native
	I1016 18:31:22.249063  287412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1016 18:31:22.249086  287412 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-084411 && echo "calico-084411" | sudo tee /etc/hostname
	I1016 18:31:22.409396  287412 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-084411
	
	I1016 18:31:22.409475  287412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-084411
	I1016 18:31:22.429507  287412 main.go:141] libmachine: Using SSH client type: native
	I1016 18:31:22.429772  287412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1016 18:31:22.429802  287412 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-084411' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-084411/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-084411' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1016 18:31:22.573843  287412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1016 18:31:22.574163  287412 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21738-8849/.minikube CaCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21738-8849/.minikube}
	I1016 18:31:22.574197  287412 ubuntu.go:190] setting up certificates
	I1016 18:31:22.574214  287412 provision.go:84] configureAuth start
	I1016 18:31:22.574283  287412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-084411
	I1016 18:31:22.599136  287412 provision.go:143] copyHostCerts
	I1016 18:31:22.599211  287412 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem, removing ...
	I1016 18:31:22.599224  287412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem
	I1016 18:31:22.599299  287412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/ca.pem (1078 bytes)
	I1016 18:31:22.599402  287412 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem, removing ...
	I1016 18:31:22.599412  287412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem
	I1016 18:31:22.599442  287412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/cert.pem (1123 bytes)
	I1016 18:31:22.599512  287412 exec_runner.go:144] found /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem, removing ...
	I1016 18:31:22.599520  287412 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem
	I1016 18:31:22.599543  287412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21738-8849/.minikube/key.pem (1679 bytes)
	I1016 18:31:22.599608  287412 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem org=jenkins.calico-084411 san=[127.0.0.1 192.168.76.2 calico-084411 localhost minikube]
	I1016 18:31:22.882196  287412 provision.go:177] copyRemoteCerts
	I1016 18:31:22.882266  287412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1016 18:31:22.882303  287412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-084411
	I1016 18:31:22.902739  287412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/calico-084411/id_rsa Username:docker}
	I1016 18:31:23.004106  287412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1016 18:31:23.028849  287412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1016 18:31:23.047987  287412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1016 18:31:23.067027  287412 provision.go:87] duration metric: took 492.798755ms to configureAuth
	I1016 18:31:23.067062  287412 ubuntu.go:206] setting minikube options for container-runtime
	I1016 18:31:23.067243  287412 config.go:182] Loaded profile config "calico-084411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:31:23.067346  287412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-084411
	I1016 18:31:23.086805  287412 main.go:141] libmachine: Using SSH client type: native
	I1016 18:31:23.087031  287412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1016 18:31:23.087055  287412 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1016 18:31:23.344431  287412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1016 18:31:23.344463  287412 machine.go:96] duration metric: took 1.285645674s to provisionDockerMachine
	I1016 18:31:23.344475  287412 client.go:171] duration metric: took 7.527441217s to LocalClient.Create
	I1016 18:31:23.344491  287412 start.go:167] duration metric: took 7.527533557s to libmachine.API.Create "calico-084411"
	I1016 18:31:23.344498  287412 start.go:293] postStartSetup for "calico-084411" (driver="docker")
	I1016 18:31:23.344507  287412 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1016 18:31:23.344566  287412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1016 18:31:23.344604  287412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-084411
	I1016 18:31:23.364884  287412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/calico-084411/id_rsa Username:docker}
	I1016 18:31:23.467157  287412 ssh_runner.go:195] Run: cat /etc/os-release
	I1016 18:31:23.471047  287412 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1016 18:31:23.471081  287412 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1016 18:31:23.471094  287412 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/addons for local assets ...
	I1016 18:31:23.471148  287412 filesync.go:126] Scanning /home/jenkins/minikube-integration/21738-8849/.minikube/files for local assets ...
	I1016 18:31:23.471254  287412 filesync.go:149] local asset: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem -> 123752.pem in /etc/ssl/certs
	I1016 18:31:23.471383  287412 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1016 18:31:23.479567  287412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:31:23.505112  287412 start.go:296] duration metric: took 160.599724ms for postStartSetup
	I1016 18:31:23.505510  287412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-084411
	I1016 18:31:23.524786  287412 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/config.json ...
	I1016 18:31:23.525043  287412 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:31:23.525093  287412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-084411
	I1016 18:31:23.544671  287412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/calico-084411/id_rsa Username:docker}
	I1016 18:31:23.641432  287412 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1016 18:31:23.646527  287412 start.go:128] duration metric: took 7.832366183s to createHost
	I1016 18:31:23.646555  287412 start.go:83] releasing machines lock for "calico-084411", held for 7.832493302s
	I1016 18:31:23.646613  287412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-084411
	I1016 18:31:23.667550  287412 ssh_runner.go:195] Run: cat /version.json
	I1016 18:31:23.667603  287412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-084411
	I1016 18:31:23.667616  287412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1016 18:31:23.667685  287412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-084411
	I1016 18:31:23.688386  287412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/calico-084411/id_rsa Username:docker}
	I1016 18:31:23.688386  287412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/calico-084411/id_rsa Username:docker}
	I1016 18:31:23.783655  287412 ssh_runner.go:195] Run: systemctl --version
	I1016 18:31:23.849288  287412 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1016 18:31:23.886446  287412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1016 18:31:23.891130  287412 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1016 18:31:23.891191  287412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1016 18:31:23.918471  287412 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1016 18:31:23.918492  287412 start.go:495] detecting cgroup driver to use...
	I1016 18:31:23.918520  287412 detect.go:190] detected "systemd" cgroup driver on host os
	I1016 18:31:23.918557  287412 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1016 18:31:23.936881  287412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1016 18:31:23.951164  287412 docker.go:218] disabling cri-docker service (if available) ...
	I1016 18:31:23.951231  287412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1016 18:31:23.969995  287412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1016 18:31:23.993272  287412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1016 18:31:24.087035  287412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1016 18:31:24.179569  287412 docker.go:234] disabling docker service ...
	I1016 18:31:24.179627  287412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1016 18:31:24.200822  287412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1016 18:31:24.214449  287412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1016 18:31:24.302210  287412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1016 18:31:24.389554  287412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1016 18:31:24.403864  287412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1016 18:31:24.420164  287412 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1016 18:31:24.420226  287412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:31:24.431451  287412 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1016 18:31:24.431519  287412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:31:24.441073  287412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:31:24.451372  287412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:31:24.464707  287412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1016 18:31:24.475269  287412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:31:24.486080  287412 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1016 18:31:24.503925  287412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
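	Two config files are touched in this step. The tee a few lines up writes a one-line /etc/crictl.yaml (runtime-endpoint: unix:///var/run/crio/crio.sock), which is why the later crictl calls need no --runtime-endpoint flag. The sed edits then rewrite the cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf; reconstructed from the commands above (the live file carries additional settings), the touched keys end up as:
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	Setting ip_unprivileged_port_start to 0 lets containers bind ports below 1024 without extra privileges.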
	I1016 18:31:24.515332  287412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1016 18:31:24.524321  287412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
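	The echo above turns on IPv4 forwarding, which kube-proxy and the CNI plugin need to route pod traffic; the sysctl probe two lines earlier checks the companion bridge-nf-call-iptables setting. An equivalent one-liner (not what minikube runs here) would be:
	sudo sysctl -w net.ipv4.ip_forward=1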
	I1016 18:31:24.534438  287412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:31:24.652325  287412 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1016 18:31:24.773586  287412 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1016 18:31:24.773667  287412 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1016 18:31:24.778487  287412 start.go:563] Will wait 60s for crictl version
	I1016 18:31:24.778553  287412 ssh_runner.go:195] Run: which crictl
	I1016 18:31:24.782448  287412 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1016 18:31:24.810882  287412 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1016 18:31:24.810967  287412 ssh_runner.go:195] Run: crio --version
	I1016 18:31:24.845120  287412 ssh_runner.go:195] Run: crio --version
	I1016 18:31:24.882937  287412 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1016 18:31:24.884534  287412 cli_runner.go:164] Run: docker network inspect calico-084411 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1016 18:31:24.903029  287412 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1016 18:31:24.907414  287412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
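	The bash one-liner above is minikube's idempotent /etc/hosts update: grep -v drops any stale host.minikube.internal entry, the fresh mapping is appended into a temp file, and the result is copied back with sudo cp (a plain redirect would write as the unprivileged SSH user). The same pattern recurs below for control-plane.minikube.internal. Afterwards /etc/hosts contains:
	192.168.76.1	host.minikube.internal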
	I1016 18:31:24.918946  287412 kubeadm.go:883] updating cluster {Name:calico-084411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-084411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1016 18:31:24.919097  287412 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:31:24.919158  287412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:31:24.966179  287412 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:31:24.966209  287412 crio.go:433] Images already preloaded, skipping extraction
	I1016 18:31:24.966272  287412 ssh_runner.go:195] Run: sudo crictl images --output json
	I1016 18:31:24.998251  287412 crio.go:514] all images are preloaded for cri-o runtime.
	I1016 18:31:24.998271  287412 cache_images.go:85] Images are preloaded, skipping loading
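	The preload check works by listing what the runtime already holds: sudo crictl images --output json is run and the reported tags are compared against the expected preload set, so extraction is skipped when everything matches. A manual spot-check along the same lines (jq is an assumption here; it may not be installed on the node) would be:
	sudo crictl images --output json | jq -r '.images[].repoTags[]'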
	I1016 18:31:24.998278  287412 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1016 18:31:24.998377  287412 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-084411 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-084411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
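	A note on the generated unit: the blank ExecStart= line is the standard systemd drop-in idiom that clears the packaged kubelet command before the override installs minikube's own flags. Once the files land and systemctl daemon-reload runs (a few lines below), the merged unit can be inspected with stock tooling (illustrative, not part of this run):
	systemctl cat kubelet    # prints kubelet.service plus the 10-kubeadm.conf drop-in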
	I1016 18:31:24.998450  287412 ssh_runner.go:195] Run: crio config
	I1016 18:31:25.047209  287412 cni.go:84] Creating CNI manager for "calico"
	I1016 18:31:25.047246  287412 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1016 18:31:25.047276  287412 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-084411 NodeName:calico-084411 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1016 18:31:25.047430  287412 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-084411"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1016 18:31:25.047500  287412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1016 18:31:25.056082  287412 binaries.go:44] Found k8s binaries, skipping transfer
	I1016 18:31:25.056163  287412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1016 18:31:25.064626  287412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1016 18:31:25.078577  287412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1016 18:31:25.094437  287412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
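	The rendered kubeadm config shipped above as kubeadm.yaml.new (2209 bytes) is promoted to /var/tmp/minikube/kubeadm.yaml just before init (the cp appears further down). As an out-of-band sanity check, recent kubeadm releases can validate such a file directly; minikube does not invoke this step here:
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml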
	I1016 18:31:25.108452  287412 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1016 18:31:25.112373  287412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1016 18:31:25.122601  287412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:31:25.215264  287412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:31:25.238215  287412 certs.go:69] Setting up /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411 for IP: 192.168.76.2
	I1016 18:31:25.238239  287412 certs.go:195] generating shared ca certs ...
	I1016 18:31:25.238258  287412 certs.go:227] acquiring lock for ca certs: {Name:mkebf15a3970a66f77cf66e14f6efeaafcab5e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:31:25.238425  287412 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key
	I1016 18:31:25.238477  287412 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key
	I1016 18:31:25.238490  287412 certs.go:257] generating profile certs ...
	I1016 18:31:25.238553  287412 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/client.key
	I1016 18:31:25.238574  287412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/client.crt with IP's: []
	I1016 18:31:25.370292  287412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/client.crt ...
	I1016 18:31:25.370323  287412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/client.crt: {Name:mk9cd6e380b54a138053498838a3efe1e1eacae7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:31:25.370503  287412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/client.key ...
	I1016 18:31:25.370515  287412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/client.key: {Name:mk14e7e228d929d679d16c160d8829badba8457e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:31:25.370598  287412 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/apiserver.key.81ae50b2
	I1016 18:31:25.370613  287412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/apiserver.crt.81ae50b2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1016 18:31:25.509921  287412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/apiserver.crt.81ae50b2 ...
	I1016 18:31:25.509947  287412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/apiserver.crt.81ae50b2: {Name:mk73295967e81c1f9d76636378276a95e6614ff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:31:25.510123  287412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/apiserver.key.81ae50b2 ...
	I1016 18:31:25.510140  287412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/apiserver.key.81ae50b2: {Name:mk97776ef014e9ecba3bf421bacf6edbcc68ab86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:31:25.510256  287412 certs.go:382] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/apiserver.crt.81ae50b2 -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/apiserver.crt
	I1016 18:31:25.510364  287412 certs.go:386] copying /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/apiserver.key.81ae50b2 -> /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/apiserver.key
	I1016 18:31:25.510459  287412 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/proxy-client.key
	I1016 18:31:25.510481  287412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/proxy-client.crt with IP's: []
	W1016 18:31:21.660832  277941 pod_ready.go:104] pod "coredns-66bc5c9577-jx8q2" is not "Ready", error: <nil>
	W1016 18:31:24.158368  277941 pod_ready.go:104] pod "coredns-66bc5c9577-jx8q2" is not "Ready", error: <nil>
	I1016 18:31:26.877410  287412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/proxy-client.crt ...
	I1016 18:31:26.877437  287412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/proxy-client.crt: {Name:mk38fa51b3664f5698a7ef883630c32dc3c59669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:31:26.877574  287412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/proxy-client.key ...
	I1016 18:31:26.877585  287412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/proxy-client.key: {Name:mk3d72279fb475e92a4253c1552a9cc7ddbd8f1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:31:26.877766  287412 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem (1338 bytes)
	W1016 18:31:26.877806  287412 certs.go:480] ignoring /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375_empty.pem, impossibly tiny 0 bytes
	I1016 18:31:26.877815  287412 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca-key.pem (1679 bytes)
	I1016 18:31:26.877836  287412 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/ca.pem (1078 bytes)
	I1016 18:31:26.877857  287412 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/cert.pem (1123 bytes)
	I1016 18:31:26.877887  287412 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/certs/key.pem (1679 bytes)
	I1016 18:31:26.877923  287412 certs.go:484] found cert: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem (1708 bytes)
	I1016 18:31:26.878450  287412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1016 18:31:26.899767  287412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1016 18:31:26.919028  287412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1016 18:31:26.940806  287412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1016 18:31:26.963182  287412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1016 18:31:26.984022  287412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1016 18:31:27.003263  287412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1016 18:31:27.023024  287412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/calico-084411/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1016 18:31:27.041020  287412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1016 18:31:27.061118  287412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/certs/12375.pem --> /usr/share/ca-certificates/12375.pem (1338 bytes)
	I1016 18:31:27.079662  287412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/ssl/certs/123752.pem --> /usr/share/ca-certificates/123752.pem (1708 bytes)
	I1016 18:31:27.097064  287412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1016 18:31:27.110463  287412 ssh_runner.go:195] Run: openssl version
	I1016 18:31:27.117232  287412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12375.pem && ln -fs /usr/share/ca-certificates/12375.pem /etc/ssl/certs/12375.pem"
	I1016 18:31:27.125989  287412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12375.pem
	I1016 18:31:27.130053  287412 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 16 17:49 /usr/share/ca-certificates/12375.pem
	I1016 18:31:27.130164  287412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12375.pem
	I1016 18:31:27.176088  287412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12375.pem /etc/ssl/certs/51391683.0"
	I1016 18:31:27.187983  287412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/123752.pem && ln -fs /usr/share/ca-certificates/123752.pem /etc/ssl/certs/123752.pem"
	I1016 18:31:27.198192  287412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/123752.pem
	I1016 18:31:27.202447  287412 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 16 17:49 /usr/share/ca-certificates/123752.pem
	I1016 18:31:27.202507  287412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/123752.pem
	I1016 18:31:27.238171  287412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/123752.pem /etc/ssl/certs/3ec20f2e.0"
	I1016 18:31:27.247327  287412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1016 18:31:27.257147  287412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:31:27.261226  287412 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 16 17:44 /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:31:27.261281  287412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1016 18:31:27.297094  287412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
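	The openssl/ln pairs above are c_rehash-style CA installation: OpenSSL resolves trust anchors in /etc/ssl/certs by subject-hash filename, so each certificate gets a symlink named after the hash that openssl x509 -hash -noout prints (the .0 suffix disambiguates hash collisions). For the minikubeCA case just logged:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0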
	I1016 18:31:27.306351  287412 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1016 18:31:27.310645  287412 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1016 18:31:27.310701  287412 kubeadm.go:400] StartCluster: {Name:calico-084411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-084411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:31:27.310799  287412 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1016 18:31:27.310858  287412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1016 18:31:27.339002  287412 cri.go:89] found id: ""
	I1016 18:31:27.339072  287412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1016 18:31:27.347462  287412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1016 18:31:27.355419  287412 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1016 18:31:27.355467  287412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1016 18:31:27.363068  287412 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1016 18:31:27.363087  287412 kubeadm.go:157] found existing configuration files:
	
	I1016 18:31:27.363140  287412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1016 18:31:27.370828  287412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1016 18:31:27.370876  287412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1016 18:31:27.378711  287412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1016 18:31:27.386788  287412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1016 18:31:27.386863  287412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1016 18:31:27.394707  287412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1016 18:31:27.402898  287412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1016 18:31:27.402961  287412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1016 18:31:27.410925  287412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1016 18:31:27.418856  287412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1016 18:31:27.418911  287412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
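	The grep/rm sequence above is stale-config cleanup: for each kubeconfig under /etc/kubernetes, minikube checks for the expected API endpoint and deletes any file that does not reference it, so kubeadm regenerates them; on this fresh node every grep fails with "No such file or directory" and the rm -f calls are no-ops. Condensed, the loop is equivalent to (a paraphrase, not the literal commands run):
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done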
	I1016 18:31:27.426559  287412 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1016 18:31:27.494988  287412 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1016 18:31:27.556983  287412 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1016 18:31:26.159860  277941 pod_ready.go:104] pod "coredns-66bc5c9577-jx8q2" is not "Ready", error: <nil>
	W1016 18:31:28.658092  277941 pod_ready.go:104] pod "coredns-66bc5c9577-jx8q2" is not "Ready", error: <nil>
	W1016 18:31:30.659063  277941 pod_ready.go:104] pod "coredns-66bc5c9577-jx8q2" is not "Ready", error: <nil>
	I1016 18:31:35.233039  285821 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1016 18:31:35.233148  285821 kubeadm.go:318] [preflight] Running pre-flight checks
	I1016 18:31:35.233303  285821 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1016 18:31:35.233404  285821 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1016 18:31:35.233452  285821 kubeadm.go:318] OS: Linux
	I1016 18:31:35.233551  285821 kubeadm.go:318] CGROUPS_CPU: enabled
	I1016 18:31:35.233680  285821 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1016 18:31:35.233761  285821 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1016 18:31:35.233822  285821 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1016 18:31:35.233863  285821 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1016 18:31:35.233901  285821 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1016 18:31:35.233940  285821 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1016 18:31:35.233976  285821 kubeadm.go:318] CGROUPS_IO: enabled
	I1016 18:31:35.234034  285821 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1016 18:31:35.234112  285821 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1016 18:31:35.234281  285821 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1016 18:31:35.234387  285821 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1016 18:31:35.235907  285821 out.go:252]   - Generating certificates and keys ...
	I1016 18:31:35.235972  285821 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 18:31:35.236056  285821 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 18:31:35.236171  285821 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 18:31:35.236259  285821 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 18:31:35.236339  285821 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 18:31:35.236425  285821 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 18:31:35.236497  285821 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 18:31:35.236601  285821 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [kindnet-084411 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1016 18:31:35.236648  285821 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 18:31:35.236786  285821 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [kindnet-084411 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1016 18:31:35.236866  285821 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 18:31:35.236934  285821 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 18:31:35.236993  285821 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 18:31:35.237103  285821 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 18:31:35.237182  285821 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 18:31:35.237237  285821 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 18:31:35.237283  285821 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 18:31:35.237368  285821 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 18:31:35.237433  285821 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 18:31:35.237516  285821 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 18:31:35.237577  285821 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 18:31:35.239524  285821 out.go:252]   - Booting up control plane ...
	I1016 18:31:35.239602  285821 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 18:31:35.239681  285821 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 18:31:35.239764  285821 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 18:31:35.239860  285821 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 18:31:35.239956  285821 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 18:31:35.240049  285821 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 18:31:35.240135  285821 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 18:31:35.240167  285821 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 18:31:35.240289  285821 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 18:31:35.240371  285821 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 18:31:35.240431  285821 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001576768s
	I1016 18:31:35.240535  285821 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 18:31:35.240636  285821 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1016 18:31:35.240781  285821 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 18:31:35.240859  285821 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 18:31:35.240940  285821 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.402612187s
	I1016 18:31:35.241016  285821 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.736838116s
	I1016 18:31:35.241126  285821 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.502218823s
	I1016 18:31:35.241299  285821 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 18:31:35.241451  285821 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 18:31:35.241514  285821 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 18:31:35.241691  285821 kubeadm.go:318] [mark-control-plane] Marking the node kindnet-084411 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 18:31:35.241770  285821 kubeadm.go:318] [bootstrap-token] Using token: va9vwo.uwjwxh80q7mwx92v
	I1016 18:31:35.243108  285821 out.go:252]   - Configuring RBAC rules ...
	I1016 18:31:35.243195  285821 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 18:31:35.243316  285821 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 18:31:35.243475  285821 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 18:31:35.243618  285821 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 18:31:35.243774  285821 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 18:31:35.243888  285821 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 18:31:35.244017  285821 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 18:31:35.244082  285821 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 18:31:35.244135  285821 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 18:31:35.244144  285821 kubeadm.go:318] 
	I1016 18:31:35.244214  285821 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 18:31:35.244222  285821 kubeadm.go:318] 
	I1016 18:31:35.244307  285821 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 18:31:35.244323  285821 kubeadm.go:318] 
	I1016 18:31:35.244357  285821 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 18:31:35.244451  285821 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 18:31:35.244543  285821 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 18:31:35.244551  285821 kubeadm.go:318] 
	I1016 18:31:35.244621  285821 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 18:31:35.244631  285821 kubeadm.go:318] 
	I1016 18:31:35.244688  285821 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 18:31:35.244697  285821 kubeadm.go:318] 
	I1016 18:31:35.244791  285821 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 18:31:35.244916  285821 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 18:31:35.245016  285821 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 18:31:35.245026  285821 kubeadm.go:318] 
	I1016 18:31:35.245127  285821 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 18:31:35.245235  285821 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 18:31:35.245242  285821 kubeadm.go:318] 
	I1016 18:31:35.245308  285821 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token va9vwo.uwjwxh80q7mwx92v \
	I1016 18:31:35.245393  285821 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c \
	I1016 18:31:35.245417  285821 kubeadm.go:318] 	--control-plane 
	I1016 18:31:35.245423  285821 kubeadm.go:318] 
	I1016 18:31:35.245504  285821 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 18:31:35.245514  285821 kubeadm.go:318] 
	I1016 18:31:35.245585  285821 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token va9vwo.uwjwxh80q7mwx92v \
	I1016 18:31:35.245679  285821 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c 
	I1016 18:31:35.245699  285821 cni.go:84] Creating CNI manager for "kindnet"
	I1016 18:31:35.247609  285821 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1016 18:31:33.159152  277941 pod_ready.go:104] pod "coredns-66bc5c9577-jx8q2" is not "Ready", error: <nil>
	I1016 18:31:35.159095  277941 pod_ready.go:94] pod "coredns-66bc5c9577-jx8q2" is "Ready"
	I1016 18:31:35.159127  277941 pod_ready.go:86] duration metric: took 38.006491422s for pod "coredns-66bc5c9577-jx8q2" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:35.162088  277941 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-523257" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:35.165623  277941 pod_ready.go:94] pod "etcd-default-k8s-diff-port-523257" is "Ready"
	I1016 18:31:35.165644  277941 pod_ready.go:86] duration metric: took 3.537392ms for pod "etcd-default-k8s-diff-port-523257" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:35.167421  277941 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-523257" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:35.171008  277941 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-523257" is "Ready"
	I1016 18:31:35.171030  277941 pod_ready.go:86] duration metric: took 3.583468ms for pod "kube-apiserver-default-k8s-diff-port-523257" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:35.172837  277941 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-523257" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:35.356094  277941 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-523257" is "Ready"
	I1016 18:31:35.356124  277941 pod_ready.go:86] duration metric: took 183.267712ms for pod "kube-controller-manager-default-k8s-diff-port-523257" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:35.556938  277941 pod_ready.go:83] waiting for pod "kube-proxy-hrdcg" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:35.955708  277941 pod_ready.go:94] pod "kube-proxy-hrdcg" is "Ready"
	I1016 18:31:35.955754  277941 pod_ready.go:86] duration metric: took 398.783003ms for pod "kube-proxy-hrdcg" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:36.156152  277941 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-523257" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:36.555834  277941 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-523257" is "Ready"
	I1016 18:31:36.555862  277941 pod_ready.go:86] duration metric: took 399.682415ms for pod "kube-scheduler-default-k8s-diff-port-523257" in "kube-system" namespace to be "Ready" or be gone ...
	I1016 18:31:36.555876  277941 pod_ready.go:40] duration metric: took 39.406475475s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1016 18:31:36.604375  277941 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1016 18:31:36.606273  277941 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-523257" cluster and "default" namespace by default
	I1016 18:31:37.714979  287412 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1016 18:31:37.715068  287412 kubeadm.go:318] [preflight] Running pre-flight checks
	I1016 18:31:37.715167  287412 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1016 18:31:37.715213  287412 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1016 18:31:37.715253  287412 kubeadm.go:318] OS: Linux
	I1016 18:31:37.715296  287412 kubeadm.go:318] CGROUPS_CPU: enabled
	I1016 18:31:37.715349  287412 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1016 18:31:37.715396  287412 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1016 18:31:37.715438  287412 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1016 18:31:37.715481  287412 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1016 18:31:37.715522  287412 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1016 18:31:37.715571  287412 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1016 18:31:37.715615  287412 kubeadm.go:318] CGROUPS_IO: enabled
	I1016 18:31:37.715764  287412 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1016 18:31:37.715932  287412 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1016 18:31:37.716065  287412 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1016 18:31:37.716148  287412 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1016 18:31:37.718851  287412 out.go:252]   - Generating certificates and keys ...
	I1016 18:31:37.718925  287412 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1016 18:31:37.719017  287412 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1016 18:31:37.719090  287412 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1016 18:31:37.719141  287412 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1016 18:31:37.719212  287412 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1016 18:31:37.719286  287412 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1016 18:31:37.719359  287412 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1016 18:31:37.719522  287412 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [calico-084411 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1016 18:31:37.719605  287412 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1016 18:31:37.719746  287412 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [calico-084411 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1016 18:31:37.719936  287412 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1016 18:31:37.720094  287412 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1016 18:31:37.720160  287412 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1016 18:31:37.720213  287412 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1016 18:31:37.720260  287412 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1016 18:31:37.720310  287412 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1016 18:31:37.720362  287412 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1016 18:31:37.720454  287412 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1016 18:31:37.720516  287412 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1016 18:31:37.720591  287412 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1016 18:31:37.720657  287412 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1016 18:31:37.722106  287412 out.go:252]   - Booting up control plane ...
	I1016 18:31:37.722188  287412 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1016 18:31:37.722280  287412 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1016 18:31:37.722361  287412 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1016 18:31:37.722468  287412 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1016 18:31:37.722553  287412 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1016 18:31:37.722647  287412 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1016 18:31:37.722758  287412 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1016 18:31:37.722795  287412 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1016 18:31:37.722908  287412 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1016 18:31:37.723010  287412 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1016 18:31:37.723065  287412 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.851177ms
	I1016 18:31:37.723151  287412 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1016 18:31:37.723228  287412 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1016 18:31:37.723298  287412 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1016 18:31:37.723361  287412 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1016 18:31:37.723435  287412 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.278639269s
	I1016 18:31:37.723494  287412 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.535146791s
	I1016 18:31:37.723550  287412 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501821505s
	I1016 18:31:37.723636  287412 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1016 18:31:37.723801  287412 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1016 18:31:37.723874  287412 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1016 18:31:37.724041  287412 kubeadm.go:318] [mark-control-plane] Marking the node calico-084411 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1016 18:31:37.724090  287412 kubeadm.go:318] [bootstrap-token] Using token: ifgf75.n4gn7k6xssymsgi9
	I1016 18:31:37.726235  287412 out.go:252]   - Configuring RBAC rules ...
	I1016 18:31:37.726328  287412 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1016 18:31:37.726399  287412 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1016 18:31:37.726562  287412 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1016 18:31:37.726790  287412 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1016 18:31:37.726944  287412 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1016 18:31:37.727066  287412 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1016 18:31:37.727219  287412 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1016 18:31:37.727281  287412 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1016 18:31:37.727338  287412 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1016 18:31:37.727348  287412 kubeadm.go:318] 
	I1016 18:31:37.727408  287412 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1016 18:31:37.727421  287412 kubeadm.go:318] 
	I1016 18:31:37.727489  287412 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1016 18:31:37.727497  287412 kubeadm.go:318] 
	I1016 18:31:37.727518  287412 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1016 18:31:37.727566  287412 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1016 18:31:37.727614  287412 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1016 18:31:37.727620  287412 kubeadm.go:318] 
	I1016 18:31:37.727675  287412 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1016 18:31:37.727680  287412 kubeadm.go:318] 
	I1016 18:31:37.727734  287412 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1016 18:31:37.727740  287412 kubeadm.go:318] 
	I1016 18:31:37.727783  287412 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1016 18:31:37.727850  287412 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1016 18:31:37.727916  287412 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1016 18:31:37.727925  287412 kubeadm.go:318] 
	I1016 18:31:37.727995  287412 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1016 18:31:37.728064  287412 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1016 18:31:37.728070  287412 kubeadm.go:318] 
	I1016 18:31:37.728139  287412 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ifgf75.n4gn7k6xssymsgi9 \
	I1016 18:31:37.728227  287412 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c \
	I1016 18:31:37.728247  287412 kubeadm.go:318] 	--control-plane 
	I1016 18:31:37.728252  287412 kubeadm.go:318] 
	I1016 18:31:37.728327  287412 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1016 18:31:37.728333  287412 kubeadm.go:318] 
	I1016 18:31:37.728400  287412 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ifgf75.n4gn7k6xssymsgi9 \
	I1016 18:31:37.728504  287412 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:40455f776e8d1f62b7cd83fc9465cbaab8b3feb140906f470c4c76b42359b35c 
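	Both kubeadm inits in this window print the same --discovery-token-ca-cert-hash (40455f77...) because every profile in the run is signed by the shared minikubeCA. The hash is the SHA-256 of the cluster CA's DER-encoded public key and can be recomputed with the command from the kubeadm documentation (path shown for a stock kubeadm layout; on these nodes the CA sits at /var/lib/minikube/certs/ca.crt):
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'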
	I1016 18:31:37.728515  287412 cni.go:84] Creating CNI manager for "calico"
	I1016 18:31:37.729964  287412 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1016 18:31:35.248961  285821 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1016 18:31:35.253639  285821 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 18:31:35.253658  285821 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1016 18:31:35.268142  285821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1016 18:31:35.541674  285821 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 18:31:35.541790  285821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:35.541963  285821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-084411 minikube.k8s.io/updated_at=2025_10_16T18_31_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=kindnet-084411 minikube.k8s.io/primary=true
	I1016 18:31:35.635656  285821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:35.647005  285821 ops.go:34] apiserver oom_adj: -16
	I1016 18:31:36.135685  285821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:36.635934  285821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:37.136442  285821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:37.635924  285821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:38.135932  285821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:38.635807  285821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:39.135875  285821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:39.636117  285821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:40.135856  285821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:40.212179  285821 kubeadm.go:1113] duration metric: took 4.670572012s to wait for elevateKubeSystemPrivileges
	I1016 18:31:40.212227  285821 kubeadm.go:402] duration metric: took 16.435772357s to StartCluster
	I1016 18:31:40.212250  285821 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:31:40.212327  285821 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:31:40.213774  285821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:31:40.214041  285821 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:31:40.214068  285821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 18:31:40.214144  285821 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:31:40.214248  285821 addons.go:69] Setting storage-provisioner=true in profile "kindnet-084411"
	I1016 18:31:40.214275  285821 addons.go:238] Setting addon storage-provisioner=true in "kindnet-084411"
	I1016 18:31:40.214280  285821 config.go:182] Loaded profile config "kindnet-084411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:31:40.214310  285821 host.go:66] Checking if "kindnet-084411" exists ...
	I1016 18:31:40.214265  285821 addons.go:69] Setting default-storageclass=true in profile "kindnet-084411"
	I1016 18:31:40.214332  285821 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-084411"
	I1016 18:31:40.214686  285821 cli_runner.go:164] Run: docker container inspect kindnet-084411 --format={{.State.Status}}
	I1016 18:31:40.214891  285821 cli_runner.go:164] Run: docker container inspect kindnet-084411 --format={{.State.Status}}
	I1016 18:31:40.216507  285821 out.go:179] * Verifying Kubernetes components...
	I1016 18:31:40.217951  285821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:31:40.240849  285821 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:31:37.732775  287412 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1016 18:31:37.732796  287412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I1016 18:31:37.748199  287412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
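Once the 539470-byte Calico manifest above is applied, the rollout can be followed with ordinary kubectl selectors. The label and DaemonSet name below are assumptions taken from the upstream Calico manifest (consistent with the calico-node-hnfsl pod that appears later in this log):

    # watch the Calico node agent come up
    kubectl -n kube-system get pods -l k8s-app=calico-node -w
    kubectl -n kube-system rollout status ds/calico-node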
	I1016 18:31:38.638098  287412 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1016 18:31:38.638128  287412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:38.638166  287412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-084411 minikube.k8s.io/updated_at=2025_10_16T18_31_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf minikube.k8s.io/name=calico-084411 minikube.k8s.io/primary=true
	I1016 18:31:38.729643  287412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:38.748080  287412 ops.go:34] apiserver oom_adj: -16
	I1016 18:31:39.229833  287412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:39.729861  287412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:40.230382  287412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:40.242078  285821 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:31:40.242101  285821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:31:40.242159  285821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-084411
	I1016 18:31:40.242470  285821 addons.go:238] Setting addon default-storageclass=true in "kindnet-084411"
	I1016 18:31:40.242512  285821 host.go:66] Checking if "kindnet-084411" exists ...
	I1016 18:31:40.244011  285821 cli_runner.go:164] Run: docker container inspect kindnet-084411 --format={{.State.Status}}
	I1016 18:31:40.279209  285821 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:31:40.279234  285821 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:31:40.279299  285821 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-084411
	I1016 18:31:40.280171  285821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/kindnet-084411/id_rsa Username:docker}
	I1016 18:31:40.309067  285821 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/kindnet-084411/id_rsa Username:docker}
	I1016 18:31:40.345553  285821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 18:31:40.397900  285821 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:31:40.410931  285821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:31:40.437631  285821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:31:40.557275  285821 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
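The sed pipeline a few lines up performs this injection by editing the CoreDNS ConfigMap in place: it inserts a hosts block before the forward directive and a log directive before errors. Reconstructed from those two insertions (a sketch, not a verbatim dump of the resulting Corefile), the relevant fragment becomes:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.103.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }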
	I1016 18:31:40.558676  285821 node_ready.go:35] waiting up to 15m0s for node "kindnet-084411" to be "Ready" ...
	I1016 18:31:40.757298  285821 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1016 18:31:40.730227  287412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:41.230114  287412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:41.729848  287412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:42.230551  287412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:42.730074  287412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1016 18:31:42.808121  287412 kubeadm.go:1113] duration metric: took 4.170041676s to wait for elevateKubeSystemPrivileges
	I1016 18:31:42.808162  287412 kubeadm.go:402] duration metric: took 15.497463705s to StartCluster
	I1016 18:31:42.808183  287412 settings.go:142] acquiring lock: {Name:mkc6b45fa02e5ff3d6715ed7bc469b5fca7072e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:31:42.808284  287412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:31:42.810373  287412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/kubeconfig: {Name:mkbb149216fe03689cd9ddb29853fa60fb9bd069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:31:42.810593  287412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1016 18:31:42.810609  287412 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:31:42.810702  287412 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1016 18:31:42.810810  287412 addons.go:69] Setting storage-provisioner=true in profile "calico-084411"
	I1016 18:31:42.810820  287412 config.go:182] Loaded profile config "calico-084411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:31:42.810858  287412 addons.go:238] Setting addon storage-provisioner=true in "calico-084411"
	I1016 18:31:42.810858  287412 addons.go:69] Setting default-storageclass=true in profile "calico-084411"
	I1016 18:31:42.810884  287412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-084411"
	I1016 18:31:42.810907  287412 host.go:66] Checking if "calico-084411" exists ...
	I1016 18:31:42.811305  287412 cli_runner.go:164] Run: docker container inspect calico-084411 --format={{.State.Status}}
	I1016 18:31:42.811535  287412 cli_runner.go:164] Run: docker container inspect calico-084411 --format={{.State.Status}}
	I1016 18:31:42.813192  287412 out.go:179] * Verifying Kubernetes components...
	I1016 18:31:42.814562  287412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1016 18:31:42.840190  287412 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1016 18:31:42.840825  287412 addons.go:238] Setting addon default-storageclass=true in "calico-084411"
	I1016 18:31:42.840869  287412 host.go:66] Checking if "calico-084411" exists ...
	I1016 18:31:42.841392  287412 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:31:42.841411  287412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1016 18:31:42.841451  287412 cli_runner.go:164] Run: docker container inspect calico-084411 --format={{.State.Status}}
	I1016 18:31:42.841461  287412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-084411
	I1016 18:31:42.874223  287412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/calico-084411/id_rsa Username:docker}
	I1016 18:31:42.874558  287412 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1016 18:31:42.874581  287412 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1016 18:31:42.874635  287412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-084411
	I1016 18:31:42.905650  287412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/calico-084411/id_rsa Username:docker}
	I1016 18:31:42.917569  287412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1016 18:31:43.004533  287412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1016 18:31:43.017058  287412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1016 18:31:43.035263  287412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1016 18:31:43.131235  287412 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1016 18:31:43.131490  287412 node_ready.go:35] waiting up to 15m0s for node "calico-084411" to be "Ready" ...
	I1016 18:31:43.453482  287412 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1016 18:31:40.759140  285821 addons.go:514] duration metric: took 544.991765ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1016 18:31:41.061339  285821 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-084411" context rescaled to 1 replicas
	W1016 18:31:42.561606  285821 node_ready.go:57] node "kindnet-084411" has "Ready":"False" status (will retry)
	W1016 18:31:44.562208  285821 node_ready.go:57] node "kindnet-084411" has "Ready":"False" status (will retry)
	I1016 18:31:43.455643  287412 addons.go:514] duration metric: took 644.938637ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1016 18:31:43.635562  287412 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-084411" context rescaled to 1 replicas
	W1016 18:31:45.135750  287412 node_ready.go:57] node "calico-084411" has "Ready":"False" status (will retry)
	W1016 18:31:46.562518  285821 node_ready.go:57] node "kindnet-084411" has "Ready":"False" status (will retry)
	W1016 18:31:48.564063  285821 node_ready.go:57] node "kindnet-084411" has "Ready":"False" status (will retry)
	I1016 18:31:47.135679  287412 node_ready.go:49] node "calico-084411" is "Ready"
	I1016 18:31:47.135732  287412 node_ready.go:38] duration metric: took 4.004164821s for node "calico-084411" to be "Ready" ...
	I1016 18:31:47.135748  287412 api_server.go:52] waiting for apiserver process to appear ...
	I1016 18:31:47.135802  287412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:31:47.149093  287412 api_server.go:72] duration metric: took 4.338453612s to wait for apiserver process to appear ...
	I1016 18:31:47.149124  287412 api_server.go:88] waiting for apiserver healthz status ...
	I1016 18:31:47.149152  287412 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1016 18:31:47.155240  287412 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
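The same probe can be reproduced by hand against the endpoint checked above (-k skips TLS verification against the cluster CA; /healthz is deprecated upstream in favour of /livez and /readyz, but it is what minikube polls here):

    curl -k https://192.168.76.2:8443/healthz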
	I1016 18:31:47.156573  287412 api_server.go:141] control plane version: v1.34.1
	I1016 18:31:47.156602  287412 api_server.go:131] duration metric: took 7.466394ms to wait for apiserver health ...
	I1016 18:31:47.156612  287412 system_pods.go:43] waiting for kube-system pods to appear ...
	I1016 18:31:47.160299  287412 system_pods.go:59] 9 kube-system pods found
	I1016 18:31:47.160340  287412 system_pods.go:61] "calico-kube-controllers-59556d9b4c-72x57" [98425c9c-ab6b-4368-b6fb-1417b7f7d60e] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1016 18:31:47.160354  287412 system_pods.go:61] "calico-node-hnfsl" [cc397569-f810-402e-b509-ba095330f2af] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1016 18:31:47.160365  287412 system_pods.go:61] "coredns-66bc5c9577-9w54r" [62535d84-211f-4eca-8707-28ab37a058aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:31:47.160380  287412 system_pods.go:61] "etcd-calico-084411" [c6be6738-a900-4b87-b302-6b7498bcff7a] Running
	I1016 18:31:47.160389  287412 system_pods.go:61] "kube-apiserver-calico-084411" [381b037a-3a29-4c5c-a1d5-a0cad983d84f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:31:47.160398  287412 system_pods.go:61] "kube-controller-manager-calico-084411" [90577057-8e5f-4be2-b712-5f2b4f82e93a] Running
	I1016 18:31:47.160404  287412 system_pods.go:61] "kube-proxy-hgvtf" [4d097fad-b98d-4b1e-b952-6126a0df57d8] Running
	I1016 18:31:47.160413  287412 system_pods.go:61] "kube-scheduler-calico-084411" [97c2ff31-0b34-47f4-85f5-8bcdd6f05264] Running
	I1016 18:31:47.160420  287412 system_pods.go:61] "storage-provisioner" [e7743cd7-a03b-48a3-8a47-8c81ad70201a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:31:47.160432  287412 system_pods.go:74] duration metric: took 3.812886ms to wait for pod list to return data ...
	I1016 18:31:47.160447  287412 default_sa.go:34] waiting for default service account to be created ...
	I1016 18:31:47.165178  287412 default_sa.go:45] found service account: "default"
	I1016 18:31:47.165204  287412 default_sa.go:55] duration metric: took 4.750083ms for default service account to be created ...
	I1016 18:31:47.165215  287412 system_pods.go:116] waiting for k8s-apps to be running ...
	I1016 18:31:47.169777  287412 system_pods.go:86] 9 kube-system pods found
	I1016 18:31:47.169813  287412 system_pods.go:89] "calico-kube-controllers-59556d9b4c-72x57" [98425c9c-ab6b-4368-b6fb-1417b7f7d60e] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1016 18:31:47.169827  287412 system_pods.go:89] "calico-node-hnfsl" [cc397569-f810-402e-b509-ba095330f2af] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1016 18:31:47.169838  287412 system_pods.go:89] "coredns-66bc5c9577-9w54r" [62535d84-211f-4eca-8707-28ab37a058aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:31:47.169844  287412 system_pods.go:89] "etcd-calico-084411" [c6be6738-a900-4b87-b302-6b7498bcff7a] Running
	I1016 18:31:47.169852  287412 system_pods.go:89] "kube-apiserver-calico-084411" [381b037a-3a29-4c5c-a1d5-a0cad983d84f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:31:47.169862  287412 system_pods.go:89] "kube-controller-manager-calico-084411" [90577057-8e5f-4be2-b712-5f2b4f82e93a] Running
	I1016 18:31:47.169870  287412 system_pods.go:89] "kube-proxy-hgvtf" [4d097fad-b98d-4b1e-b952-6126a0df57d8] Running
	I1016 18:31:47.169876  287412 system_pods.go:89] "kube-scheduler-calico-084411" [97c2ff31-0b34-47f4-85f5-8bcdd6f05264] Running
	I1016 18:31:47.169883  287412 system_pods.go:89] "storage-provisioner" [e7743cd7-a03b-48a3-8a47-8c81ad70201a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:31:47.169909  287412 retry.go:31] will retry after 229.079716ms: missing components: kube-dns
	I1016 18:31:47.403662  287412 system_pods.go:86] 9 kube-system pods found
	I1016 18:31:47.403692  287412 system_pods.go:89] "calico-kube-controllers-59556d9b4c-72x57" [98425c9c-ab6b-4368-b6fb-1417b7f7d60e] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1016 18:31:47.403700  287412 system_pods.go:89] "calico-node-hnfsl" [cc397569-f810-402e-b509-ba095330f2af] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1016 18:31:47.403706  287412 system_pods.go:89] "coredns-66bc5c9577-9w54r" [62535d84-211f-4eca-8707-28ab37a058aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:31:47.403710  287412 system_pods.go:89] "etcd-calico-084411" [c6be6738-a900-4b87-b302-6b7498bcff7a] Running
	I1016 18:31:47.403733  287412 system_pods.go:89] "kube-apiserver-calico-084411" [381b037a-3a29-4c5c-a1d5-a0cad983d84f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:31:47.403738  287412 system_pods.go:89] "kube-controller-manager-calico-084411" [90577057-8e5f-4be2-b712-5f2b4f82e93a] Running
	I1016 18:31:47.403744  287412 system_pods.go:89] "kube-proxy-hgvtf" [4d097fad-b98d-4b1e-b952-6126a0df57d8] Running
	I1016 18:31:47.403750  287412 system_pods.go:89] "kube-scheduler-calico-084411" [97c2ff31-0b34-47f4-85f5-8bcdd6f05264] Running
	I1016 18:31:47.403756  287412 system_pods.go:89] "storage-provisioner" [e7743cd7-a03b-48a3-8a47-8c81ad70201a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:31:47.403772  287412 retry.go:31] will retry after 322.575153ms: missing components: kube-dns
	I1016 18:31:47.730902  287412 system_pods.go:86] 9 kube-system pods found
	I1016 18:31:47.730941  287412 system_pods.go:89] "calico-kube-controllers-59556d9b4c-72x57" [98425c9c-ab6b-4368-b6fb-1417b7f7d60e] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1016 18:31:47.730953  287412 system_pods.go:89] "calico-node-hnfsl" [cc397569-f810-402e-b509-ba095330f2af] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1016 18:31:47.730963  287412 system_pods.go:89] "coredns-66bc5c9577-9w54r" [62535d84-211f-4eca-8707-28ab37a058aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:31:47.730971  287412 system_pods.go:89] "etcd-calico-084411" [c6be6738-a900-4b87-b302-6b7498bcff7a] Running
	I1016 18:31:47.731000  287412 system_pods.go:89] "kube-apiserver-calico-084411" [381b037a-3a29-4c5c-a1d5-a0cad983d84f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:31:47.731010  287412 system_pods.go:89] "kube-controller-manager-calico-084411" [90577057-8e5f-4be2-b712-5f2b4f82e93a] Running
	I1016 18:31:47.731016  287412 system_pods.go:89] "kube-proxy-hgvtf" [4d097fad-b98d-4b1e-b952-6126a0df57d8] Running
	I1016 18:31:47.731025  287412 system_pods.go:89] "kube-scheduler-calico-084411" [97c2ff31-0b34-47f4-85f5-8bcdd6f05264] Running
	I1016 18:31:47.731035  287412 system_pods.go:89] "storage-provisioner" [e7743cd7-a03b-48a3-8a47-8c81ad70201a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1016 18:31:47.731062  287412 retry.go:31] will retry after 421.956302ms: missing components: kube-dns
	I1016 18:31:48.157651  287412 system_pods.go:86] 9 kube-system pods found
	I1016 18:31:48.157693  287412 system_pods.go:89] "calico-kube-controllers-59556d9b4c-72x57" [98425c9c-ab6b-4368-b6fb-1417b7f7d60e] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1016 18:31:48.157706  287412 system_pods.go:89] "calico-node-hnfsl" [cc397569-f810-402e-b509-ba095330f2af] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1016 18:31:48.157739  287412 system_pods.go:89] "coredns-66bc5c9577-9w54r" [62535d84-211f-4eca-8707-28ab37a058aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:31:48.157745  287412 system_pods.go:89] "etcd-calico-084411" [c6be6738-a900-4b87-b302-6b7498bcff7a] Running
	I1016 18:31:48.157754  287412 system_pods.go:89] "kube-apiserver-calico-084411" [381b037a-3a29-4c5c-a1d5-a0cad983d84f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:31:48.157760  287412 system_pods.go:89] "kube-controller-manager-calico-084411" [90577057-8e5f-4be2-b712-5f2b4f82e93a] Running
	I1016 18:31:48.157766  287412 system_pods.go:89] "kube-proxy-hgvtf" [4d097fad-b98d-4b1e-b952-6126a0df57d8] Running
	I1016 18:31:48.157773  287412 system_pods.go:89] "kube-scheduler-calico-084411" [97c2ff31-0b34-47f4-85f5-8bcdd6f05264] Running
	I1016 18:31:48.157779  287412 system_pods.go:89] "storage-provisioner" [e7743cd7-a03b-48a3-8a47-8c81ad70201a] Running
	I1016 18:31:48.157796  287412 retry.go:31] will retry after 377.323327ms: missing components: kube-dns
	I1016 18:31:48.545761  287412 system_pods.go:86] 9 kube-system pods found
	I1016 18:31:48.545803  287412 system_pods.go:89] "calico-kube-controllers-59556d9b4c-72x57" [98425c9c-ab6b-4368-b6fb-1417b7f7d60e] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1016 18:31:48.545831  287412 system_pods.go:89] "calico-node-hnfsl" [cc397569-f810-402e-b509-ba095330f2af] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1016 18:31:48.545841  287412 system_pods.go:89] "coredns-66bc5c9577-9w54r" [62535d84-211f-4eca-8707-28ab37a058aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:31:48.545849  287412 system_pods.go:89] "etcd-calico-084411" [c6be6738-a900-4b87-b302-6b7498bcff7a] Running
	I1016 18:31:48.545897  287412 system_pods.go:89] "kube-apiserver-calico-084411" [381b037a-3a29-4c5c-a1d5-a0cad983d84f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:31:48.545903  287412 system_pods.go:89] "kube-controller-manager-calico-084411" [90577057-8e5f-4be2-b712-5f2b4f82e93a] Running
	I1016 18:31:48.545909  287412 system_pods.go:89] "kube-proxy-hgvtf" [4d097fad-b98d-4b1e-b952-6126a0df57d8] Running
	I1016 18:31:48.545915  287412 system_pods.go:89] "kube-scheduler-calico-084411" [97c2ff31-0b34-47f4-85f5-8bcdd6f05264] Running
	I1016 18:31:48.545922  287412 system_pods.go:89] "storage-provisioner" [e7743cd7-a03b-48a3-8a47-8c81ad70201a] Running
	I1016 18:31:48.545940  287412 retry.go:31] will retry after 702.445851ms: missing components: kube-dns
	I1016 18:31:49.252420  287412 system_pods.go:86] 9 kube-system pods found
	I1016 18:31:49.252453  287412 system_pods.go:89] "calico-kube-controllers-59556d9b4c-72x57" [98425c9c-ab6b-4368-b6fb-1417b7f7d60e] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1016 18:31:49.252464  287412 system_pods.go:89] "calico-node-hnfsl" [cc397569-f810-402e-b509-ba095330f2af] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1016 18:31:49.252485  287412 system_pods.go:89] "coredns-66bc5c9577-9w54r" [62535d84-211f-4eca-8707-28ab37a058aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:31:49.252490  287412 system_pods.go:89] "etcd-calico-084411" [c6be6738-a900-4b87-b302-6b7498bcff7a] Running
	I1016 18:31:49.252499  287412 system_pods.go:89] "kube-apiserver-calico-084411" [381b037a-3a29-4c5c-a1d5-a0cad983d84f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1016 18:31:49.252506  287412 system_pods.go:89] "kube-controller-manager-calico-084411" [90577057-8e5f-4be2-b712-5f2b4f82e93a] Running
	I1016 18:31:49.252513  287412 system_pods.go:89] "kube-proxy-hgvtf" [4d097fad-b98d-4b1e-b952-6126a0df57d8] Running
	I1016 18:31:49.252523  287412 system_pods.go:89] "kube-scheduler-calico-084411" [97c2ff31-0b34-47f4-85f5-8bcdd6f05264] Running
	I1016 18:31:49.252529  287412 system_pods.go:89] "storage-provisioner" [e7743cd7-a03b-48a3-8a47-8c81ad70201a] Running
	I1016 18:31:49.252547  287412 retry.go:31] will retry after 916.557011ms: missing components: kube-dns
	I1016 18:31:50.173479  287412 system_pods.go:86] 9 kube-system pods found
	I1016 18:31:50.173517  287412 system_pods.go:89] "calico-kube-controllers-59556d9b4c-72x57" [98425c9c-ab6b-4368-b6fb-1417b7f7d60e] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1016 18:31:50.173528  287412 system_pods.go:89] "calico-node-hnfsl" [cc397569-f810-402e-b509-ba095330f2af] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1016 18:31:50.173539  287412 system_pods.go:89] "coredns-66bc5c9577-9w54r" [62535d84-211f-4eca-8707-28ab37a058aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1016 18:31:50.173547  287412 system_pods.go:89] "etcd-calico-084411" [c6be6738-a900-4b87-b302-6b7498bcff7a] Running
	I1016 18:31:50.173554  287412 system_pods.go:89] "kube-apiserver-calico-084411" [381b037a-3a29-4c5c-a1d5-a0cad983d84f] Running
	I1016 18:31:50.173561  287412 system_pods.go:89] "kube-controller-manager-calico-084411" [90577057-8e5f-4be2-b712-5f2b4f82e93a] Running
	I1016 18:31:50.173570  287412 system_pods.go:89] "kube-proxy-hgvtf" [4d097fad-b98d-4b1e-b952-6126a0df57d8] Running
	I1016 18:31:50.173577  287412 system_pods.go:89] "kube-scheduler-calico-084411" [97c2ff31-0b34-47f4-85f5-8bcdd6f05264] Running
	I1016 18:31:50.173583  287412 system_pods.go:89] "storage-provisioner" [e7743cd7-a03b-48a3-8a47-8c81ad70201a] Running
	I1016 18:31:50.173601  287412 retry.go:31] will retry after 815.755896ms: missing components: kube-dns
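The retry loop above keeps polling system pods until the component it reports as kube-dns — the CoreDNS pod, which carries the k8s-app=kube-dns label — becomes Ready. An equivalent one-shot wait, sketched with plain kubectl rather than minikube's retry.go:

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=15m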
	
	
	==> CRI-O <==
	Oct 16 18:31:14 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:14.178122805Z" level=info msg="Started container" PID=1721 containerID=ff3449af929a9b113391e1ad8e07e8db3a119356c9f106d3f1e2514594ad32e9 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9/dashboard-metrics-scraper id=a5065fa4-daaf-4f0d-80f3-eafe69f284f2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=43bab1f94f3047820e7dca06866a5d239eaaa3050c0fa2545f35cc236f3f24ce
	Oct 16 18:31:15 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:15.100081349Z" level=info msg="Removing container: 583f796a40a2d7b64d8a0ff893a0aed4e0bb3a002aca914ef98cfeeeb2bc0316" id=9a3e9470-eb5d-4ab6-817e-93032447c499 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:31:15 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:15.113988974Z" level=info msg="Removed container 583f796a40a2d7b64d8a0ff893a0aed4e0bb3a002aca914ef98cfeeeb2bc0316: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9/dashboard-metrics-scraper" id=9a3e9470-eb5d-4ab6-817e-93032447c499 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.136613632Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8cbcb1fd-022f-43d7-aa12-5dae0717f3cd name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.137593892Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=05fe5392-f90d-40e0-a59b-2b640961645f name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.13874215Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=37328d1c-a130-49fe-8afe-9377052ea740 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.139015483Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.143826404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.143992427Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ca2c74be7a7b14ea6b3cfbb63ed8ab26a5f807ad3645783da2d585b327e8aba1/merged/etc/passwd: no such file or directory"
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.144019932Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ca2c74be7a7b14ea6b3cfbb63ed8ab26a5f807ad3645783da2d585b327e8aba1/merged/etc/group: no such file or directory"
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.144258997Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.177174391Z" level=info msg="Created container 647cec4bbb47274dc1420ae531b76d776191e13d13b9fd04b9491583d76e562b: kube-system/storage-provisioner/storage-provisioner" id=37328d1c-a130-49fe-8afe-9377052ea740 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.177874228Z" level=info msg="Starting container: 647cec4bbb47274dc1420ae531b76d776191e13d13b9fd04b9491583d76e562b" id=5be7fd15-5ff5-44c5-bdcd-751ab3e982ce name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.18019666Z" level=info msg="Started container" PID=1735 containerID=647cec4bbb47274dc1420ae531b76d776191e13d13b9fd04b9491583d76e562b description=kube-system/storage-provisioner/storage-provisioner id=5be7fd15-5ff5-44c5-bdcd-751ab3e982ce name=/runtime.v1.RuntimeService/StartContainer sandboxID=f8845965dad145a352b172550b7a92545e62bb5e151ee5d68643864bd5a72862
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.00019735Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=814dbded-4e45-4786-842e-cd1fd54d29de name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.001362649Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b9e2bd71-1c80-4e13-a0f2-1847af01ecd2 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.002434454Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9/dashboard-metrics-scraper" id=f88347e8-c520-4de7-bc80-277b43b36725 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.002709475Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.010283208Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.010924613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.051984114Z" level=info msg="Created container e78a709a1e982b94959494ba3fcfe8d1d1c105e0303753e1f0337482c2a83b92: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9/dashboard-metrics-scraper" id=f88347e8-c520-4de7-bc80-277b43b36725 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.052797473Z" level=info msg="Starting container: e78a709a1e982b94959494ba3fcfe8d1d1c105e0303753e1f0337482c2a83b92" id=15922e51-57de-4c03-890e-8cdc96581c3c name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.055143034Z" level=info msg="Started container" PID=1770 containerID=e78a709a1e982b94959494ba3fcfe8d1d1c105e0303753e1f0337482c2a83b92 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9/dashboard-metrics-scraper id=15922e51-57de-4c03-890e-8cdc96581c3c name=/runtime.v1.RuntimeService/StartContainer sandboxID=43bab1f94f3047820e7dca06866a5d239eaaa3050c0fa2545f35cc236f3f24ce
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.17697885Z" level=info msg="Removing container: ff3449af929a9b113391e1ad8e07e8db3a119356c9f106d3f1e2514594ad32e9" id=b2f8a20b-c5f3-4a4b-b473-5169d8420f98 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.189851579Z" level=info msg="Removed container ff3449af929a9b113391e1ad8e07e8db3a119356c9f106d3f1e2514594ad32e9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9/dashboard-metrics-scraper" id=b2f8a20b-c5f3-4a4b-b473-5169d8420f98 name=/runtime.v1.RuntimeService/RemoveContainer
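The create/start/remove cycle above is the dashboard-metrics-scraper container restarting (the container-status table below shows it Exited on attempt 3). The usual way to see why is the previous container's logs, using the pod name from these entries:

    kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-fwnm9 --previous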
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	e78a709a1e982       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago       Exited              dashboard-metrics-scraper   3                   43bab1f94f304       dashboard-metrics-scraper-6ffb444bf9-fwnm9             kubernetes-dashboard
	647cec4bbb472       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   f8845965dad14       storage-provisioner                                    kube-system
	ea8b339d31e4f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago       Running             kubernetes-dashboard        0                   3c2b952fddade       kubernetes-dashboard-855c9754f9-h7jqr                  kubernetes-dashboard
	ab2f53987fdb5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           57 seconds ago       Running             coredns                     0                   895decb37c6c0       coredns-66bc5c9577-jx8q2                               kube-system
	7f8d671cdb996       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   d9d083348b954       busybox                                                default
	e61c60b433b3d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           57 seconds ago       Running             kube-proxy                  0                   0f14e38265cf6       kube-proxy-hrdcg                                       kube-system
	9b8d270e35020       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   5a5042c48155a       kindnet-bctzw                                          kube-system
	03a3db6c20e6f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   f8845965dad14       storage-provisioner                                    kube-system
	04779c28f1cb8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   da877dd27a4f3       etcd-default-k8s-diff-port-523257                      kube-system
	0b66af6e1e6d7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   0fabcffaf1a89       kube-apiserver-default-k8s-diff-port-523257            kube-system
	b18e9cf1502f7       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   239a7828b70dc       kube-scheduler-default-k8s-diff-port-523257            kube-system
	9b2c049fb89ee       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   20f55313e4a03       kube-controller-manager-default-k8s-diff-port-523257   kube-system
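A table like the one above can be reproduced directly on the node — assuming, as this section's layout suggests, that it is the standard crictl view of all containers:

    minikube -p default-k8s-diff-port-523257 ssh -- sudo crictl ps -a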
	
	
	==> coredns [ab2f53987fdb5f62ac2f6ecbf2cad5d434aa5db3641d2794a69fafe85c7ae170] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46364 - 4761 "HINFO IN 4538672824164964382.2473203377410024508. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027320412s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
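The dial timeouts to 10.96.0.1:443 above are most plausibly CoreDNS failing to reach the kubernetes Service VIP before kube-proxy and the CNI finished programming the node; the listers sync once networking settles. The VIP itself can be confirmed with:

    kubectl get svc kubernetes -o wide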
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-523257
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-523257
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=default-k8s-diff-port-523257
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_29_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:29:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-523257
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:31:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:31:46 +0000   Thu, 16 Oct 2025 18:29:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:31:46 +0000   Thu, 16 Oct 2025 18:29:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:31:46 +0000   Thu, 16 Oct 2025 18:29:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 18:31:46 +0000   Thu, 16 Oct 2025 18:30:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-523257
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                dd8663d9-3eb1-4047-bb84-b123d51b045c
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-jx8q2                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m20s
	  kube-system                 etcd-default-k8s-diff-port-523257                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m27s
	  kube-system                 kindnet-bctzw                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-523257             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-523257    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-hrdcg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-default-k8s-diff-port-523257             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fwnm9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-h7jqr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
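As a cross-check, the totals above match the per-pod figures in the Non-terminated Pods table:

    cpu requests: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver)
                + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m
    cpu limits:   100m (kindnet) = 100m
    memory:       70Mi + 100Mi + 50Mi = 220Mi requests; 170Mi + 50Mi = 220Mi limits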
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m20s              kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m26s              kubelet          Node default-k8s-diff-port-523257 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m26s              kubelet          Node default-k8s-diff-port-523257 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m26s              kubelet          Node default-k8s-diff-port-523257 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m26s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m22s              node-controller  Node default-k8s-diff-port-523257 event: Registered Node default-k8s-diff-port-523257 in Controller
	  Normal  NodeReady                99s                kubelet          Node default-k8s-diff-port-523257 status is now: NodeReady
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 61s)  kubelet          Node default-k8s-diff-port-523257 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 61s)  kubelet          Node default-k8s-diff-port-523257 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 61s)  kubelet          Node default-k8s-diff-port-523257 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                node-controller  Node default-k8s-diff-port-523257 event: Registered Node default-k8s-diff-port-523257 in Controller
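	
	Note: the Events table above records two complete kubelet lifecycles: the original boot (the 2m26s entries) and a second "Starting kubelet." at 61s with x8 NodeHas* transitions. That is consistent with the deliberate stop/restart this StartStop test group performs, not a crash. The same events can be pulled programmatically; a minimal sketch assuming client-go, a local kubeconfig, and the node name from this run:
	
	// List the events recorded for a node, roughly what `kubectl describe
	// node` prints in its Events table. Sketch only: assumes client-go and
	// ~/.kube/config; node events are stored in the "default" namespace.
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"path/filepath"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)
	
	func main() {
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Select only events whose involved object is this node.
		events, err := cs.CoreV1().Events("default").List(context.Background(), metav1.ListOptions{
			FieldSelector: "involvedObject.kind=Node,involvedObject.name=default-k8s-diff-port-523257",
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, e := range events.Items {
			fmt.Printf("%-8s %-25s %-16s %s\n", e.Type, e.Reason, e.Source.Component, e.Message)
		}
	}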
	
	
	==> dmesg <==
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.008703] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.022984] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023933] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +2.047848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +4.031714] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +8.063410] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[ +16.382910] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[Oct16 17:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
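	
	Note: "martian source" lines are the kernel flagging packets whose source address should be impossible on the receiving interface, here 127.0.0.1 arriving on eth0. Inside the kicbase container this is typically hairpinned loopback/NodePort traffic: noisy, but not by itself a failure. The kernel only prints these lines when log_martians is enabled; a minimal sketch (plain Go reading procfs, Linux only) to check the two sysctls involved:
	
	// Print the sysctls that control martian-packet logging and
	// reverse-path filtering. Sketch assumes Linux procfs; run inside
	// the node container to see why dmesg carries these lines.
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func readSysctl(path string) string {
		b, err := os.ReadFile(path)
		if err != nil {
			return "unreadable: " + err.Error()
		}
		return strings.TrimSpace(string(b))
	}
	
	func main() {
		for _, key := range []string{
			"/proc/sys/net/ipv4/conf/all/log_martians", // 1 => log "martian source" lines
			"/proc/sys/net/ipv4/conf/all/rp_filter",    // 0=off, 1=strict, 2=loose reverse-path filter
		} {
			fmt.Printf("%s = %s\n", key, readSysctl(key))
		}
	}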
	
	
	==> etcd [04779c28f1cb8c52ec504e348fc93fc81c1b41fa21e6a652062eeab076efcbb7] <==
	{"level":"warn","ts":"2025-10-16T18:30:54.787048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.795065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.805172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.814318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.822220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.829239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.835866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.843482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.850867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.857520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.865560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.872568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.880494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.892660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.900111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.907223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.913667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.921920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.929029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.936998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.944748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.951347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.965624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.973787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.982191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57746","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:31:53 up  1:14,  0 user,  load average: 8.02, 4.73, 2.61
	Linux default-k8s-diff-port-523257 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9b8d270e350203a5340ad6d9042b73e17d91cd1645c28c1832675d24a7810006] <==
	I1016 18:30:56.546966       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 18:30:56.547196       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1016 18:30:56.547344       1 main.go:148] setting mtu 1500 for CNI 
	I1016 18:30:56.547362       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 18:30:56.547388       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T18:30:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 18:30:56.843374       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 18:30:56.942408       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 18:30:56.942857       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 18:30:56.943680       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 18:30:57.243793       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 18:30:57.243822       1 metrics.go:72] Registering metrics
	I1016 18:30:57.243892       1 controller.go:711] "Syncing nftables rules"
	I1016 18:31:06.843788       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1016 18:31:06.843863       1 main.go:301] handling current node
	I1016 18:31:16.843824       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1016 18:31:16.843871       1 main.go:301] handling current node
	I1016 18:31:26.843780       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1016 18:31:26.843816       1 main.go:301] handling current node
	I1016 18:31:36.843807       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1016 18:31:36.843864       1 main.go:301] handling current node
	I1016 18:31:46.843623       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1016 18:31:46.843665       1 main.go:301] handling current node
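	
	Note: the kindnet log settles into a 10-second reconcile: each tick it re-handles every known node (here only the current node, 192.168.85.2). The earlier "nri plugin exited" line is expected when /var/run/nri/nri.sock does not exist. The loop shape, as a sketch with a hypothetical handleNode standing in for kindnet's real route/nftables sync:
	
	// Periodic reconcile loop in the shape of the kindnet log above:
	// every 10s, re-process all known nodes. handleNode is a hypothetical
	// stand-in for the real sync work; this is a pattern sketch, not
	// kindnet's actual code.
	package main
	
	import (
		"log"
		"time"
	)
	
	func handleNode(name string, ips []string) {
		log.Printf("Handling node %s with IPs: %v", name, ips)
	}
	
	func main() {
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		nodes := map[string][]string{
			"default-k8s-diff-port-523257": {"192.168.85.2"},
		}
		for range ticker.C {
			for name, ips := range nodes {
				handleNode(name, ips)
			}
		}
	}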
	
	
	==> kube-apiserver [0b66af6e1e6d7fd2735eb36e2ebf313e19ff23b7b1b8b97956469bf3c79a9f5f] <==
	I1016 18:30:55.604395       1 cache.go:39] Caches are synced for autoregister controller
	I1016 18:30:55.608991       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1016 18:30:55.612689       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1016 18:30:55.652141       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1016 18:30:55.665588       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1016 18:30:55.665623       1 policy_source.go:240] refreshing policies
	I1016 18:30:55.681067       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 18:30:55.681113       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1016 18:30:55.681350       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1016 18:30:55.684457       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1016 18:30:55.684483       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 18:30:55.684932       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:30:55.691428       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1016 18:30:55.898616       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 18:30:55.927965       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 18:30:55.952443       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 18:30:55.959873       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 18:30:55.969085       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 18:30:56.023498       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.76.171"}
	I1016 18:30:56.036116       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.12.51"}
	I1016 18:30:56.484908       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 18:30:58.942652       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 18:30:59.336284       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 18:30:59.336284       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 18:30:59.536662       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9b2c049fb89ee7ff479ec6255ed7c0c81b6c9f0faf4d8e9c462dcc7f723f7e05] <==
	I1016 18:30:58.934935       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1016 18:30:58.934984       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1016 18:30:58.934988       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:30:58.935239       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 18:30:58.935267       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 18:30:58.935342       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1016 18:30:58.937749       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:30:58.937780       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1016 18:30:58.939864       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1016 18:30:58.940748       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1016 18:30:58.940820       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1016 18:30:58.940907       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1016 18:30:58.940920       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1016 18:30:58.940927       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1016 18:30:58.942199       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1016 18:30:58.944513       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1016 18:30:58.946659       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1016 18:30:58.950257       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1016 18:30:58.950384       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 18:30:58.952772       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 18:30:58.954969       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1016 18:30:58.958881       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1016 18:30:58.961047       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 18:30:58.962617       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 18:30:58.967328       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e61c60b433b3d2dc3a6ff511f85889007a52b6b282238326838c23b4a470fdf8] <==
	I1016 18:30:56.413295       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:30:56.470737       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:30:56.570933       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:30:56.570976       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1016 18:30:56.571087       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:30:56.594320       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:30:56.594396       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:30:56.600853       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:30:56.601373       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:30:56.601434       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:30:56.602999       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:30:56.603032       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:30:56.603083       1 config.go:200] "Starting service config controller"
	I1016 18:30:56.603131       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:30:56.603148       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:30:56.603154       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:30:56.603270       1 config.go:309] "Starting node config controller"
	I1016 18:30:56.603278       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:30:56.603292       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:30:56.703869       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 18:30:56.703916       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 18:30:56.703925       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
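	
	Note: the "Waiting for caches to sync" / "Caches are synced" pairs here (and again in the scheduler and controller-manager logs) are client-go's shared-informer startup handshake: a controller holds off acting until its local caches reflect the apiserver's state. A minimal sketch of the same handshake, assuming client-go, a local kubeconfig, and a Services informer of the kind kube-proxy watches:
	
	// Start a shared informer and block until its cache has synced,
	// mirroring the "Waiting for caches to sync" / "Caches are synced"
	// lines in the kube-proxy log. Sketch assumes ~/.kube/config.
	package main
	
	import (
		"log"
		"path/filepath"
		"time"
	
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		svcInformer := factory.Core().V1().Services().Informer()
	
		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)
	
		log.Println("Waiting for caches to sync")
		if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
			log.Fatal("cache never synced")
		}
		log.Println("Caches are synced; safe to act on Services")
	}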
	
	
	==> kube-scheduler [b18e9cf1502f711153aae166f07b5f02021e0507c8f195aece2617ed442e892a] <==
	I1016 18:30:55.563357       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:30:55.566601       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:30:55.566704       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:30:55.567760       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 18:30:55.567905       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1016 18:30:55.574054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1016 18:30:55.590590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 18:30:55.598073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 18:30:55.598420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 18:30:55.598706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 18:30:55.599064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 18:30:55.599392       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 18:30:55.599768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 18:30:55.600045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 18:30:55.603194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 18:30:55.603225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 18:30:55.603248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 18:30:55.603307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:30:55.603304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 18:30:55.603381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 18:30:55.603407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 18:30:55.603423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 18:30:55.603518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:30:55.603605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1016 18:30:56.867419       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
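	
	Note: the burst of "Failed to watch ... is forbidden" errors is a startup-ordering race after the restart: the scheduler's reflectors begin listing before the apiserver is serving the system:kube-scheduler RBAC rules again. Reflectors retry internally, and the final "Caches are synced" line shows the recovery. The same retry-until-authorized shape, as a sketch using apimachinery's polling helper (assumes client-go and a local kubeconfig):
	
	// Retry a List until RBAC has propagated, the way client-go reflectors
	// ride out the transient "forbidden" errors in the scheduler log above.
	// Real reflectors retry internally; this just makes the loop explicit.
	package main
	
	import (
		"context"
		"log"
		"path/filepath"
		"time"
	
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		err = wait.PollUntilContextTimeout(context.Background(), time.Second, time.Minute, true,
			func(ctx context.Context) (bool, error) {
				_, err := cs.CoreV1().Pods("").List(ctx, metav1.ListOptions{Limit: 1})
				if apierrors.IsForbidden(err) {
					log.Println("still forbidden, retrying:", err)
					return false, nil // transient: RBAC not propagated yet
				}
				return err == nil, err
			})
		if err != nil {
			log.Fatal(err)
		}
		log.Println("authorized; caches can sync now")
	}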
	
	
	==> kubelet <==
	Oct 16 18:31:03 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:03.057296     727 scope.go:117] "RemoveContainer" containerID="160e6afea29ad901958f1b8969a8e6a2e37e448f30dc86433b0cfb261235be51"
	Oct 16 18:31:04 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:04.063650     727 scope.go:117] "RemoveContainer" containerID="160e6afea29ad901958f1b8969a8e6a2e37e448f30dc86433b0cfb261235be51"
	Oct 16 18:31:04 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:04.063813     727 scope.go:117] "RemoveContainer" containerID="583f796a40a2d7b64d8a0ff893a0aed4e0bb3a002aca914ef98cfeeeb2bc0316"
	Oct 16 18:31:04 default-k8s-diff-port-523257 kubelet[727]: E1016 18:31:04.064295     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fwnm9_kubernetes-dashboard(76cce414-2912-44ba-94e5-1dd398c2a5bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9" podUID="76cce414-2912-44ba-94e5-1dd398c2a5bd"
	Oct 16 18:31:05 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:05.069435     727 scope.go:117] "RemoveContainer" containerID="583f796a40a2d7b64d8a0ff893a0aed4e0bb3a002aca914ef98cfeeeb2bc0316"
	Oct 16 18:31:05 default-k8s-diff-port-523257 kubelet[727]: E1016 18:31:05.069576     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fwnm9_kubernetes-dashboard(76cce414-2912-44ba-94e5-1dd398c2a5bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9" podUID="76cce414-2912-44ba-94e5-1dd398c2a5bd"
	Oct 16 18:31:05 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:05.127436     727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 16 18:31:08 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:08.106500     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h7jqr" podStartSLOduration=1.5325373089999998 podStartE2EDuration="9.106474912s" podCreationTimestamp="2025-10-16 18:30:59 +0000 UTC" firstStartedPulling="2025-10-16 18:30:59.820253363 +0000 UTC m=+6.912960488" lastFinishedPulling="2025-10-16 18:31:07.394190981 +0000 UTC m=+14.486898091" observedRunningTime="2025-10-16 18:31:08.10614737 +0000 UTC m=+15.198854526" watchObservedRunningTime="2025-10-16 18:31:08.106474912 +0000 UTC m=+15.199182042"
	Oct 16 18:31:13 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:13.809294     727 scope.go:117] "RemoveContainer" containerID="583f796a40a2d7b64d8a0ff893a0aed4e0bb3a002aca914ef98cfeeeb2bc0316"
	Oct 16 18:31:15 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:15.098503     727 scope.go:117] "RemoveContainer" containerID="583f796a40a2d7b64d8a0ff893a0aed4e0bb3a002aca914ef98cfeeeb2bc0316"
	Oct 16 18:31:15 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:15.098840     727 scope.go:117] "RemoveContainer" containerID="ff3449af929a9b113391e1ad8e07e8db3a119356c9f106d3f1e2514594ad32e9"
	Oct 16 18:31:15 default-k8s-diff-port-523257 kubelet[727]: E1016 18:31:15.099060     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fwnm9_kubernetes-dashboard(76cce414-2912-44ba-94e5-1dd398c2a5bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9" podUID="76cce414-2912-44ba-94e5-1dd398c2a5bd"
	Oct 16 18:31:23 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:23.808320     727 scope.go:117] "RemoveContainer" containerID="ff3449af929a9b113391e1ad8e07e8db3a119356c9f106d3f1e2514594ad32e9"
	Oct 16 18:31:23 default-k8s-diff-port-523257 kubelet[727]: E1016 18:31:23.808536     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fwnm9_kubernetes-dashboard(76cce414-2912-44ba-94e5-1dd398c2a5bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9" podUID="76cce414-2912-44ba-94e5-1dd398c2a5bd"
	Oct 16 18:31:27 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:27.136277     727 scope.go:117] "RemoveContainer" containerID="03a3db6c20e6f61d8de12e3b0e8dfa40712be1a186100fddf7ff3c5d3a2e0587"
	Oct 16 18:31:37 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:37.999573     727 scope.go:117] "RemoveContainer" containerID="ff3449af929a9b113391e1ad8e07e8db3a119356c9f106d3f1e2514594ad32e9"
	Oct 16 18:31:38 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:38.172783     727 scope.go:117] "RemoveContainer" containerID="ff3449af929a9b113391e1ad8e07e8db3a119356c9f106d3f1e2514594ad32e9"
	Oct 16 18:31:38 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:38.174514     727 scope.go:117] "RemoveContainer" containerID="e78a709a1e982b94959494ba3fcfe8d1d1c105e0303753e1f0337482c2a83b92"
	Oct 16 18:31:38 default-k8s-diff-port-523257 kubelet[727]: E1016 18:31:38.174784     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fwnm9_kubernetes-dashboard(76cce414-2912-44ba-94e5-1dd398c2a5bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9" podUID="76cce414-2912-44ba-94e5-1dd398c2a5bd"
	Oct 16 18:31:43 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:43.809364     727 scope.go:117] "RemoveContainer" containerID="e78a709a1e982b94959494ba3fcfe8d1d1c105e0303753e1f0337482c2a83b92"
	Oct 16 18:31:43 default-k8s-diff-port-523257 kubelet[727]: E1016 18:31:43.809600     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fwnm9_kubernetes-dashboard(76cce414-2912-44ba-94e5-1dd398c2a5bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9" podUID="76cce414-2912-44ba-94e5-1dd398c2a5bd"
	Oct 16 18:31:49 default-k8s-diff-port-523257 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 18:31:49 default-k8s-diff-port-523257 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 18:31:49 default-k8s-diff-port-523257 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 16 18:31:49 default-k8s-diff-port-523257 systemd[1]: kubelet.service: Consumed 1.942s CPU time.
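	
	Note: dashboard-metrics-scraper is in CrashLoopBackOff with the restart delay doubling 10s -> 20s -> 40s, which is the kubelet's default container back-off: a 10s base, doubled per failed restart, capped at 5 minutes, and reset after a sufficiently long clean run. A sketch of that progression, assuming the default base and cap:
	
	// CrashLoopBackOff delay progression as seen in the kubelet log:
	// 10s, 20s, 40s, ... doubling per failed restart up to a cap.
	// Sketch assumes the kubelet defaults (10s base, 5m cap).
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func backoff(failures int) time.Duration {
		const (
			base     = 10 * time.Second
			maxDelay = 5 * time.Minute
		)
		d := base
		for i := 1; i < failures; i++ {
			d *= 2
			if d >= maxDelay {
				return maxDelay
			}
		}
		return d
	}
	
	func main() {
		for n := 1; n <= 7; n++ {
			fmt.Printf("failure %d -> back-off %s\n", n, backoff(n))
		}
	}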
	
	
	==> kubernetes-dashboard [ea8b339d31e4fb6b38988c306bb020b4436eeba762aa1a960b6697e387d1a153] <==
	2025/10/16 18:31:07 Starting overwatch
	2025/10/16 18:31:07 Using namespace: kubernetes-dashboard
	2025/10/16 18:31:07 Using in-cluster config to connect to apiserver
	2025/10/16 18:31:07 Using secret token for csrf signing
	2025/10/16 18:31:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/16 18:31:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/16 18:31:07 Successful initial request to the apiserver, version: v1.34.1
	2025/10/16 18:31:07 Generating JWE encryption key
	2025/10/16 18:31:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/16 18:31:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/16 18:31:07 Initializing JWE encryption key from synchronized object
	2025/10/16 18:31:07 Creating in-cluster Sidecar client
	2025/10/16 18:31:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 18:31:07 Serving insecurely on HTTP port: 9090
	2025/10/16 18:31:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
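	
	Note: the dashboard's failing metric health check lines up with the kubelet log above: dashboard-metrics-scraper is crash-looping, so its Service has no ready endpoints and in-cluster requests to it fail until the scraper stays up. A sketch that checks for ready endpoints directly, assuming client-go and a local kubeconfig:
	
	// Check whether a Service has any ready endpoints; the dashboard's
	// metric client fails above because dashboard-metrics-scraper has
	// none while its pod crash-loops. Sketch assumes ~/.kube/config.
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"path/filepath"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		ep, err := cs.CoreV1().Endpoints("kubernetes-dashboard").Get(
			context.Background(), "dashboard-metrics-scraper", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		ready := 0
		for _, subset := range ep.Subsets {
			ready += len(subset.Addresses) // NotReadyAddresses are excluded here
		}
		fmt.Printf("dashboard-metrics-scraper ready endpoints: %d\n", ready)
	}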
	
	
	==> storage-provisioner [03a3db6c20e6f61d8de12e3b0e8dfa40712be1a186100fddf7ff3c5d3a2e0587] <==
	I1016 18:30:56.369029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1016 18:31:26.371194       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [647cec4bbb47274dc1420ae531b76d776191e13d13b9fd04b9491583d76e562b] <==
	I1016 18:31:27.194344       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1016 18:31:27.202729       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 18:31:27.202784       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1016 18:31:27.205277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:30.659544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:34.920009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:38.518548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:41.572132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:44.596933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:44.603971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 18:31:44.604161       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 18:31:44.604656       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a9f3852a-feb3-4f6a-a138-16ba01201036", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-523257_7a5e7606-c7cb-4807-9f24-190560a34cc2 became leader
	I1016 18:31:44.604694       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-523257_7a5e7606-c7cb-4807-9f24-190560a34cc2!
	W1016 18:31:44.610624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:44.616614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 18:31:44.705047       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-523257_7a5e7606-c7cb-4807-9f24-190560a34cc2!
	W1016 18:31:46.620515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:46.625808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:48.629977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:48.634685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:50.638422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:50.642783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:52.647608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:52.655142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
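	
	Note: the second storage-provisioner instance recovers where the first timed out: it rides out the contention, acquires the kube-system/k8s.io-minikube-hostpath lock, and only then starts the provisioner controller. The repeated deprecation warnings come from the lock itself, which this provisioner build still keeps in a v1 Endpoints object; client-go now recommends Lease objects instead. The same acquire-then-run shape with the recommended Lease lock, as a sketch (identifiers assumed; not the provisioner's actual code):
	
	// Leader election in the shape of the storage-provisioner log:
	// contend for a named lock, start work only once it is acquired.
	// Sketch uses the Lease-based lock client-go recommends (the real
	// provisioner still uses Endpoints, hence the warnings above).
	package main
	
	import (
		"context"
		"log"
		"os"
		"path/filepath"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
		"k8s.io/client-go/util/homedir"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		host, _ := os.Hostname()
	
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: host},
		}
	
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("successfully acquired lease; starting provisioner controller")
					<-ctx.Done()
				},
				OnStoppedLeading: func() { log.Println("lost lease") },
			},
		})
	}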
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-523257 -n default-k8s-diff-port-523257
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-523257 -n default-k8s-diff-port-523257: exit status 2 (368.371434ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-523257 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-523257
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-523257:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0",
	        "Created": "2025-10-16T18:29:11.800479319Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 278386,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-16T18:30:46.146114431Z",
	            "FinishedAt": "2025-10-16T18:30:43.097053859Z"
	        },
	        "Image": "sha256:496f866fde942b6f85dc3d23428f27ed11a5cd69b522133c43b8dc97e7575c9e",
	        "ResolvConfPath": "/var/lib/docker/containers/b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0/hostname",
	        "HostsPath": "/var/lib/docker/containers/b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0/hosts",
	        "LogPath": "/var/lib/docker/containers/b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0/b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0-json.log",
	        "Name": "/default-k8s-diff-port-523257",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-523257:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-523257",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b0bbc4eeeb33edc712317fb55e6d5ca0004d4f23de07fb3d5a3d35bee63fadc0",
	                "LowerDir": "/var/lib/docker/overlay2/3c55bed1f62478cc2c96719d866ecf1124db59b51bd2a9657261f8e58e8a903e-init/diff:/var/lib/docker/overlay2/434a3d607cafd69d4c1f9e0638eb88f2d2613332686c16aff22e88f900d12053/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3c55bed1f62478cc2c96719d866ecf1124db59b51bd2a9657261f8e58e8a903e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3c55bed1f62478cc2c96719d866ecf1124db59b51bd2a9657261f8e58e8a903e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3c55bed1f62478cc2c96719d866ecf1124db59b51bd2a9657261f8e58e8a903e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-523257",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-523257/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-523257",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-523257",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-523257",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "995af03ecb2973a541b0e9b3911ec2f6e4d5dfcbfa552004ae12e29ceef5157c",
	            "SandboxKey": "/var/run/docker/netns/995af03ecb29",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-523257": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:b1:5d:27:87:91",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "18ba3d11487252e3067f2e3b5f472d435c8e0f7e30303d875809bd325d5e3e3d",
	                    "EndpointID": "81e993dbf91e95cb698fea8d38c8713fbb65cbaae78f2e4143deb34ba11f6284",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-523257",
	                        "b0bbc4eeeb33"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
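
Note: every PortBinding in HostConfig above has "HostPort": "", which asks Docker to assign an ephemeral host port; the ports actually chosen (33103-33107) appear only under NetworkSettings.Ports, which is also where minikube reads back, e.g., the API server mapping for 8444/tcp. A minimal sketch that extracts those mappings from `docker inspect` JSON piped on stdin:

// Read `docker inspect <container>` JSON on stdin and print the host
// ports Docker assigned for each container port: the values shown
// under NetworkSettings.Ports above. Usage sketch:
//   docker inspect default-k8s-diff-port-523257 | go run main.go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	var containers []inspect // docker inspect emits a JSON array
	if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		for port, bindings := range c.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%-10s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}
}
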
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-523257 -n default-k8s-diff-port-523257
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-523257 -n default-k8s-diff-port-523257: exit status 2 (339.386468ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-523257 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-523257 logs -n 25: (1.358542905s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-084411 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo cat /etc/kubernetes/kubelet.conf                                                                                                               │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo cat /var/lib/kubelet/config.yaml                                                                                                               │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo systemctl status docker --all --full --no-pager                                                                                                │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │                     │
	│ ssh     │ -p auto-084411 sudo systemctl cat docker --no-pager                                                                                                                │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo cat /etc/docker/daemon.json                                                                                                                    │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │                     │
	│ ssh     │ -p auto-084411 sudo docker system info                                                                                                                             │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │                     │
	│ ssh     │ -p auto-084411 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │                     │
	│ ssh     │ -p auto-084411 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │                     │
	│ ssh     │ -p auto-084411 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo cri-dockerd --version                                                                                                                          │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │                     │
	│ ssh     │ -p auto-084411 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo containerd config dump                                                                                                                         │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ image   │ default-k8s-diff-port-523257 image list --format=json                                                                                                              │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ pause   │ -p default-k8s-diff-port-523257 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-523257 │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │                     │
	│ ssh     │ -p auto-084411 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ ssh     │ -p auto-084411 sudo crio config                                                                                                                                    │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ delete  │ -p auto-084411                                                                                                                                                     │ auto-084411                  │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │ 16 Oct 25 18:31 UTC │
	│ start   │ -p custom-flannel-084411 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-084411        │ jenkins │ v1.37.0 │ 16 Oct 25 18:31 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
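	
	The audit table above is minikube's own record of the CLI invocations issued against these profiles near the end of the run. On a machine with the same MINIKUBE_HOME, roughly the same view can be printed with the binary under test; that this build supports the --audit flag is an assumption (the flag exists in recent minikube releases):
	
	  out/minikube-linux-amd64 logs --audit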
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 18:31:54
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 18:31:54.544140  298340 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:31:54.544286  298340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:31:54.544294  298340 out.go:374] Setting ErrFile to fd 2...
	I1016 18:31:54.544300  298340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:31:54.544622  298340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:31:54.545359  298340 out.go:368] Setting JSON to false
	I1016 18:31:54.546876  298340 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4463,"bootTime":1760635052,"procs":389,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:31:54.546982  298340 start.go:141] virtualization: kvm guest
	I1016 18:31:54.549607  298340 out.go:179] * [custom-flannel-084411] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:31:54.551294  298340 notify.go:220] Checking for updates...
	I1016 18:31:54.551316  298340 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:31:54.552827  298340 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:31:54.556202  298340 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:31:54.557983  298340 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 18:31:54.559327  298340 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:31:54.560595  298340 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:31:54.562383  298340 config.go:182] Loaded profile config "calico-084411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:31:54.562482  298340 config.go:182] Loaded profile config "default-k8s-diff-port-523257": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:31:54.562559  298340 config.go:182] Loaded profile config "kindnet-084411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:31:54.562653  298340 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:31:54.591106  298340 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 18:31:54.591199  298340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:31:54.651936  298340 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-16 18:31:54.641028779 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:31:54.652079  298340 docker.go:318] overlay module found
	I1016 18:31:54.653919  298340 out.go:179] * Using the docker driver based on user configuration
	I1016 18:31:54.655753  298340 start.go:305] selected driver: docker
	I1016 18:31:54.655772  298340 start.go:925] validating driver "docker" against <nil>
	I1016 18:31:54.655787  298340 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:31:54.656476  298340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:31:54.722895  298340 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-16 18:31:54.71101128 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:31:54.723131  298340 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 18:31:54.723446  298340 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1016 18:31:54.725495  298340 out.go:179] * Using Docker driver with root privileges
	I1016 18:31:54.726836  298340 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1016 18:31:54.726871  298340 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1016 18:31:54.726961  298340 start.go:349] cluster config:
	{Name:custom-flannel-084411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-084411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 18:31:54.728398  298340 out.go:179] * Starting "custom-flannel-084411" primary control-plane node in "custom-flannel-084411" cluster
	I1016 18:31:54.729581  298340 cache.go:123] Beginning downloading kic base image for docker with crio
	I1016 18:31:54.730769  298340 out.go:179] * Pulling base image v0.0.48-1760363564-21724 ...
	I1016 18:31:54.731978  298340 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1016 18:31:54.732034  298340 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1016 18:31:54.732049  298340 cache.go:58] Caching tarball of preloaded images
	I1016 18:31:54.732107  298340 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon
	I1016 18:31:54.732176  298340 preload.go:233] Found /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1016 18:31:54.732186  298340 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1016 18:31:54.732294  298340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/custom-flannel-084411/config.json ...
	I1016 18:31:54.732315  298340 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/custom-flannel-084411/config.json: {Name:mk386d1f97e8f55cf65df1ce9af1437428f36a48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1016 18:31:54.756493  298340 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 in local docker daemon, skipping pull
	I1016 18:31:54.756533  298340 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 exists in daemon, skipping load
	I1016 18:31:54.756554  298340 cache.go:232] Successfully downloaded all kic artifacts
	I1016 18:31:54.756591  298340 start.go:360] acquireMachinesLock for custom-flannel-084411: {Name:mka604a92a2d10df76c86e424931c9ffdfd8c6eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1016 18:31:54.756708  298340 start.go:364] duration metric: took 96.011µs to acquireMachinesLock for "custom-flannel-084411"
	I1016 18:31:54.756753  298340 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-084411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-084411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1016 18:31:54.756848  298340 start.go:125] createHost starting for "" (driver="docker")
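	
	For reference, the start invocation this "Last Start" log traces, copied from the audit table above, was:
	
	  out/minikube-linux-amd64 start -p custom-flannel-084411 --memory=3072 --alsologtostderr \
	    --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml \
	    --driver=docker --container-runtime=crio
	
	The relative testdata/kube-flannel.yaml path is what appears verbatim in the CNI manager lines above; it is presumably resolved against the integration test's working directory.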
	
	
	==> CRI-O <==
	Oct 16 18:31:14 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:14.178122805Z" level=info msg="Started container" PID=1721 containerID=ff3449af929a9b113391e1ad8e07e8db3a119356c9f106d3f1e2514594ad32e9 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9/dashboard-metrics-scraper id=a5065fa4-daaf-4f0d-80f3-eafe69f284f2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=43bab1f94f3047820e7dca06866a5d239eaaa3050c0fa2545f35cc236f3f24ce
	Oct 16 18:31:15 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:15.100081349Z" level=info msg="Removing container: 583f796a40a2d7b64d8a0ff893a0aed4e0bb3a002aca914ef98cfeeeb2bc0316" id=9a3e9470-eb5d-4ab6-817e-93032447c499 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:31:15 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:15.113988974Z" level=info msg="Removed container 583f796a40a2d7b64d8a0ff893a0aed4e0bb3a002aca914ef98cfeeeb2bc0316: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9/dashboard-metrics-scraper" id=9a3e9470-eb5d-4ab6-817e-93032447c499 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.136613632Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8cbcb1fd-022f-43d7-aa12-5dae0717f3cd name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.137593892Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=05fe5392-f90d-40e0-a59b-2b640961645f name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.13874215Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=37328d1c-a130-49fe-8afe-9377052ea740 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.139015483Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.143826404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.143992427Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ca2c74be7a7b14ea6b3cfbb63ed8ab26a5f807ad3645783da2d585b327e8aba1/merged/etc/passwd: no such file or directory"
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.144019932Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ca2c74be7a7b14ea6b3cfbb63ed8ab26a5f807ad3645783da2d585b327e8aba1/merged/etc/group: no such file or directory"
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.144258997Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.177174391Z" level=info msg="Created container 647cec4bbb47274dc1420ae531b76d776191e13d13b9fd04b9491583d76e562b: kube-system/storage-provisioner/storage-provisioner" id=37328d1c-a130-49fe-8afe-9377052ea740 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.177874228Z" level=info msg="Starting container: 647cec4bbb47274dc1420ae531b76d776191e13d13b9fd04b9491583d76e562b" id=5be7fd15-5ff5-44c5-bdcd-751ab3e982ce name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:31:27 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:27.18019666Z" level=info msg="Started container" PID=1735 containerID=647cec4bbb47274dc1420ae531b76d776191e13d13b9fd04b9491583d76e562b description=kube-system/storage-provisioner/storage-provisioner id=5be7fd15-5ff5-44c5-bdcd-751ab3e982ce name=/runtime.v1.RuntimeService/StartContainer sandboxID=f8845965dad145a352b172550b7a92545e62bb5e151ee5d68643864bd5a72862
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.00019735Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=814dbded-4e45-4786-842e-cd1fd54d29de name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.001362649Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b9e2bd71-1c80-4e13-a0f2-1847af01ecd2 name=/runtime.v1.ImageService/ImageStatus
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.002434454Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9/dashboard-metrics-scraper" id=f88347e8-c520-4de7-bc80-277b43b36725 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.002709475Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.010283208Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.010924613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.051984114Z" level=info msg="Created container e78a709a1e982b94959494ba3fcfe8d1d1c105e0303753e1f0337482c2a83b92: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9/dashboard-metrics-scraper" id=f88347e8-c520-4de7-bc80-277b43b36725 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.052797473Z" level=info msg="Starting container: e78a709a1e982b94959494ba3fcfe8d1d1c105e0303753e1f0337482c2a83b92" id=15922e51-57de-4c03-890e-8cdc96581c3c name=/runtime.v1.RuntimeService/StartContainer
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.055143034Z" level=info msg="Started container" PID=1770 containerID=e78a709a1e982b94959494ba3fcfe8d1d1c105e0303753e1f0337482c2a83b92 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9/dashboard-metrics-scraper id=15922e51-57de-4c03-890e-8cdc96581c3c name=/runtime.v1.RuntimeService/StartContainer sandboxID=43bab1f94f3047820e7dca06866a5d239eaaa3050c0fa2545f35cc236f3f24ce
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.17697885Z" level=info msg="Removing container: ff3449af929a9b113391e1ad8e07e8db3a119356c9f106d3f1e2514594ad32e9" id=b2f8a20b-c5f3-4a4b-b473-5169d8420f98 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 16 18:31:38 default-k8s-diff-port-523257 crio[567]: time="2025-10-16T18:31:38.189851579Z" level=info msg="Removed container ff3449af929a9b113391e1ad8e07e8db3a119356c9f106d3f1e2514594ad32e9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9/dashboard-metrics-scraper" id=b2f8a20b-c5f3-4a4b-b473-5169d8420f98 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	e78a709a1e982       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago       Exited              dashboard-metrics-scraper   3                   43bab1f94f304       dashboard-metrics-scraper-6ffb444bf9-fwnm9             kubernetes-dashboard
	647cec4bbb472       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           28 seconds ago       Running             storage-provisioner         1                   f8845965dad14       storage-provisioner                                    kube-system
	ea8b339d31e4f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago       Running             kubernetes-dashboard        0                   3c2b952fddade       kubernetes-dashboard-855c9754f9-h7jqr                  kubernetes-dashboard
	ab2f53987fdb5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           59 seconds ago       Running             coredns                     0                   895decb37c6c0       coredns-66bc5c9577-jx8q2                               kube-system
	7f8d671cdb996       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           59 seconds ago       Running             busybox                     1                   d9d083348b954       busybox                                                default
	e61c60b433b3d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           59 seconds ago       Running             kube-proxy                  0                   0f14e38265cf6       kube-proxy-hrdcg                                       kube-system
	9b8d270e35020       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           59 seconds ago       Running             kindnet-cni                 0                   5a5042c48155a       kindnet-bctzw                                          kube-system
	03a3db6c20e6f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           59 seconds ago       Exited              storage-provisioner         0                   f8845965dad14       storage-provisioner                                    kube-system
	04779c28f1cb8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   da877dd27a4f3       etcd-default-k8s-diff-port-523257                      kube-system
	0b66af6e1e6d7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   0fabcffaf1a89       kube-apiserver-default-k8s-diff-port-523257            kube-system
	b18e9cf1502f7       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   239a7828b70dc       kube-scheduler-default-k8s-diff-port-523257            kube-system
	9b2c049fb89ee       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   20f55313e4a03       kube-controller-manager-default-k8s-diff-port-523257   kube-system
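	
	The table above follows crictl's "ps" layout; a hedged way to reproduce it inside the node (crictl is normally available in the kicbase image) is:
	
	  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-523257 -- sudo crictl ps -a
	
	Note the ATTEMPT column: dashboard-metrics-scraper is already on attempt 3 and Exited, matching the remove/recreate loop in the CRI-O journal above.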
	
	
	==> coredns [ab2f53987fdb5f62ac2f6ecbf2cad5d434aa5db3641d2794a69fafe85c7ae170] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46364 - 4761 "HINFO IN 4538672824164964382.2473203377410024508. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027320412s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
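	
	The trailing "dial tcp 10.96.0.1:443: i/o timeout" errors are the kubernetes plugin's list calls against the apiserver's ClusterIP, characteristic of CoreDNS coming up before kube-proxy/kindnet had finished reprogramming the dataplane after the restart; whether they cleared is not visible in this window. The same log can be re-read straight from the runtime using the container ID from the status table above:
	
	  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-523257 -- sudo crictl logs ab2f53987fdb5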
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-523257
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-523257
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff53908eeb4c5186cf96060d3a2725845a066caf
	                    minikube.k8s.io/name=default-k8s-diff-port-523257
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_16T18_29_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 16 Oct 2025 18:29:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-523257
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 16 Oct 2025 18:31:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 16 Oct 2025 18:31:46 +0000   Thu, 16 Oct 2025 18:29:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 16 Oct 2025 18:31:46 +0000   Thu, 16 Oct 2025 18:29:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 16 Oct 2025 18:31:46 +0000   Thu, 16 Oct 2025 18:29:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 16 Oct 2025 18:31:46 +0000   Thu, 16 Oct 2025 18:30:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-523257
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 66f4c9907daa2bafbf78246f68ed04ba
	  System UUID:                dd8663d9-3eb1-4047-bb84-b123d51b045c
	  Boot ID:                    a4e93efe-c4ba-4d82-8cea-58e683ae0e22
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-66bc5c9577-jx8q2                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m23s
	  kube-system                 etcd-default-k8s-diff-port-523257                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m30s
	  kube-system                 kindnet-bctzw                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-523257             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-523257    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-proxy-hrdcg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-523257             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fwnm9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-h7jqr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m22s              kube-proxy       
	  Normal  Starting                 59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m29s              kubelet          Node default-k8s-diff-port-523257 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m29s              kubelet          Node default-k8s-diff-port-523257 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m29s              kubelet          Node default-k8s-diff-port-523257 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m29s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m25s              node-controller  Node default-k8s-diff-port-523257 event: Registered Node default-k8s-diff-port-523257 in Controller
	  Normal  NodeReady                102s               kubelet          Node default-k8s-diff-port-523257 status is now: NodeReady
	  Normal  Starting                 64s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 64s)  kubelet          Node default-k8s-diff-port-523257 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 64s)  kubelet          Node default-k8s-diff-port-523257 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 64s)  kubelet          Node default-k8s-diff-port-523257 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           58s                node-controller  Node default-k8s-diff-port-523257 event: Registered Node default-k8s-diff-port-523257 in Controller
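	
	This block is ordinary "kubectl describe node" output captured at ~18:31; the duplicated Starting/NodeHasSufficient* events record the kubelet being started twice (initial boot ~2m29s earlier, restart 64s earlier). Since minikube names the kubeconfig context after the profile, it can be regenerated with:
	
	  kubectl --context default-k8s-diff-port-523257 describe node default-k8s-diff-port-523257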
	
	
	==> dmesg <==
	[  +0.092247] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026649] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.272634] kauditd_printk_skb: 47 callbacks suppressed
	[Oct16 17:46] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.008703] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023940] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.022984] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023881] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +1.023933] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +2.047848] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +4.031714] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[  +8.063410] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[ +16.382910] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
	[Oct16 17:47] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 f5 1f ca 83 db 6a 74 55 c6 78 79 08 00
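	
	The repeated "martian source 10.244.0.20 from 127.0.0.1" entries are the kernel flagging packets with a loopback source address arriving on eth0; on Docker-driver hosts, where all profiles share the host kernel, this is usually benign hairpin/NAT noise from the pod network rather than a failure, though that reading is an inference, not something the report asserts. The buffer appears to be dmesg in relative-time form, roughly:
	
	  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-523257 -- sudo dmesg --reltime | tail -n 30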
	
	
	==> etcd [04779c28f1cb8c52ec504e348fc93fc81c1b41fa21e6a652062eeab076efcbb7] <==
	{"level":"warn","ts":"2025-10-16T18:30:54.787048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.795065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.805172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.814318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.822220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.829239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.835866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.843482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.850867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.857520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.865560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.872568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.880494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.892660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.900111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.907223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.913667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.921920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.929029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.936998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.944748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.951347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.965624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.973787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-16T18:30:54.982191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57746","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:31:56 up  1:14,  0 user,  load average: 8.02, 4.73, 2.61
	Linux default-k8s-diff-port-523257 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9b8d270e350203a5340ad6d9042b73e17d91cd1645c28c1832675d24a7810006] <==
	I1016 18:30:56.546966       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1016 18:30:56.547196       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1016 18:30:56.547344       1 main.go:148] setting mtu 1500 for CNI 
	I1016 18:30:56.547362       1 main.go:178] kindnetd IP family: "ipv4"
	I1016 18:30:56.547388       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-16T18:30:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1016 18:30:56.843374       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1016 18:30:56.942408       1 controller.go:381] "Waiting for informer caches to sync"
	I1016 18:30:56.942857       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1016 18:30:56.943680       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1016 18:30:57.243793       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1016 18:30:57.243822       1 metrics.go:72] Registering metrics
	I1016 18:30:57.243892       1 controller.go:711] "Syncing nftables rules"
	I1016 18:31:06.843788       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1016 18:31:06.843863       1 main.go:301] handling current node
	I1016 18:31:16.843824       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1016 18:31:16.843871       1 main.go:301] handling current node
	I1016 18:31:26.843780       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1016 18:31:26.843816       1 main.go:301] handling current node
	I1016 18:31:36.843807       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1016 18:31:36.843864       1 main.go:301] handling current node
	I1016 18:31:46.843623       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1016 18:31:46.843665       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0b66af6e1e6d7fd2735eb36e2ebf313e19ff23b7b1b8b97956469bf3c79a9f5f] <==
	I1016 18:30:55.604395       1 cache.go:39] Caches are synced for autoregister controller
	I1016 18:30:55.608991       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1016 18:30:55.612689       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1016 18:30:55.652141       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1016 18:30:55.665588       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1016 18:30:55.665623       1 policy_source.go:240] refreshing policies
	I1016 18:30:55.681067       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1016 18:30:55.681113       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1016 18:30:55.681350       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1016 18:30:55.684457       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1016 18:30:55.684483       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1016 18:30:55.684932       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1016 18:30:55.691428       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1016 18:30:55.898616       1 controller.go:667] quota admission added evaluator for: namespaces
	I1016 18:30:55.927965       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1016 18:30:55.952443       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1016 18:30:55.959873       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1016 18:30:55.969085       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1016 18:30:56.023498       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.76.171"}
	I1016 18:30:56.036116       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.12.51"}
	I1016 18:30:56.484908       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1016 18:30:58.942652       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1016 18:30:59.336284       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 18:30:59.336284       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1016 18:30:59.536662       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9b2c049fb89ee7ff479ec6255ed7c0c81b6c9f0faf4d8e9c462dcc7f723f7e05] <==
	I1016 18:30:58.934935       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1016 18:30:58.934984       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1016 18:30:58.934988       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1016 18:30:58.935239       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1016 18:30:58.935267       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1016 18:30:58.935342       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1016 18:30:58.937749       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1016 18:30:58.937780       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1016 18:30:58.939864       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1016 18:30:58.940748       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1016 18:30:58.940820       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1016 18:30:58.940907       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1016 18:30:58.940920       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1016 18:30:58.940927       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1016 18:30:58.942199       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1016 18:30:58.944513       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1016 18:30:58.946659       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1016 18:30:58.950257       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1016 18:30:58.950384       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1016 18:30:58.952772       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1016 18:30:58.954969       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1016 18:30:58.958881       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1016 18:30:58.961047       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1016 18:30:58.962617       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1016 18:30:58.967328       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e61c60b433b3d2dc3a6ff511f85889007a52b6b282238326838c23b4a470fdf8] <==
	I1016 18:30:56.413295       1 server_linux.go:53] "Using iptables proxy"
	I1016 18:30:56.470737       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1016 18:30:56.570933       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1016 18:30:56.570976       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1016 18:30:56.571087       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1016 18:30:56.594320       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1016 18:30:56.594396       1 server_linux.go:132] "Using iptables Proxier"
	I1016 18:30:56.600853       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1016 18:30:56.601373       1 server.go:527] "Version info" version="v1.34.1"
	I1016 18:30:56.601434       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:30:56.602999       1 config.go:403] "Starting serviceCIDR config controller"
	I1016 18:30:56.603032       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1016 18:30:56.603083       1 config.go:200] "Starting service config controller"
	I1016 18:30:56.603131       1 config.go:106] "Starting endpoint slice config controller"
	I1016 18:30:56.603148       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1016 18:30:56.603154       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1016 18:30:56.603270       1 config.go:309] "Starting node config controller"
	I1016 18:30:56.603278       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1016 18:30:56.603292       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1016 18:30:56.703869       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1016 18:30:56.703916       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1016 18:30:56.703925       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b18e9cf1502f711153aae166f07b5f02021e0507c8f195aece2617ed442e892a] <==
	I1016 18:30:55.563357       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1016 18:30:55.566601       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:30:55.566704       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1016 18:30:55.567760       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1016 18:30:55.567905       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1016 18:30:55.574054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1016 18:30:55.590590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1016 18:30:55.598073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1016 18:30:55.598420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1016 18:30:55.598706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1016 18:30:55.599064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1016 18:30:55.599392       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1016 18:30:55.599768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1016 18:30:55.600045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1016 18:30:55.603194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1016 18:30:55.603225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1016 18:30:55.603248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1016 18:30:55.603307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1016 18:30:55.603304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1016 18:30:55.603381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1016 18:30:55.603407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1016 18:30:55.603423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1016 18:30:55.603518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1016 18:30:55.603605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1016 18:30:56.867419       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 16 18:31:03 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:03.057296     727 scope.go:117] "RemoveContainer" containerID="160e6afea29ad901958f1b8969a8e6a2e37e448f30dc86433b0cfb261235be51"
	Oct 16 18:31:04 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:04.063650     727 scope.go:117] "RemoveContainer" containerID="160e6afea29ad901958f1b8969a8e6a2e37e448f30dc86433b0cfb261235be51"
	Oct 16 18:31:04 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:04.063813     727 scope.go:117] "RemoveContainer" containerID="583f796a40a2d7b64d8a0ff893a0aed4e0bb3a002aca914ef98cfeeeb2bc0316"
	Oct 16 18:31:04 default-k8s-diff-port-523257 kubelet[727]: E1016 18:31:04.064295     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fwnm9_kubernetes-dashboard(76cce414-2912-44ba-94e5-1dd398c2a5bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9" podUID="76cce414-2912-44ba-94e5-1dd398c2a5bd"
	Oct 16 18:31:05 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:05.069435     727 scope.go:117] "RemoveContainer" containerID="583f796a40a2d7b64d8a0ff893a0aed4e0bb3a002aca914ef98cfeeeb2bc0316"
	Oct 16 18:31:05 default-k8s-diff-port-523257 kubelet[727]: E1016 18:31:05.069576     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fwnm9_kubernetes-dashboard(76cce414-2912-44ba-94e5-1dd398c2a5bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9" podUID="76cce414-2912-44ba-94e5-1dd398c2a5bd"
	Oct 16 18:31:05 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:05.127436     727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 16 18:31:08 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:08.106500     727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h7jqr" podStartSLOduration=1.5325373089999998 podStartE2EDuration="9.106474912s" podCreationTimestamp="2025-10-16 18:30:59 +0000 UTC" firstStartedPulling="2025-10-16 18:30:59.820253363 +0000 UTC m=+6.912960488" lastFinishedPulling="2025-10-16 18:31:07.394190981 +0000 UTC m=+14.486898091" observedRunningTime="2025-10-16 18:31:08.10614737 +0000 UTC m=+15.198854526" watchObservedRunningTime="2025-10-16 18:31:08.106474912 +0000 UTC m=+15.199182042"
	Oct 16 18:31:13 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:13.809294     727 scope.go:117] "RemoveContainer" containerID="583f796a40a2d7b64d8a0ff893a0aed4e0bb3a002aca914ef98cfeeeb2bc0316"
	Oct 16 18:31:15 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:15.098503     727 scope.go:117] "RemoveContainer" containerID="583f796a40a2d7b64d8a0ff893a0aed4e0bb3a002aca914ef98cfeeeb2bc0316"
	Oct 16 18:31:15 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:15.098840     727 scope.go:117] "RemoveContainer" containerID="ff3449af929a9b113391e1ad8e07e8db3a119356c9f106d3f1e2514594ad32e9"
	Oct 16 18:31:15 default-k8s-diff-port-523257 kubelet[727]: E1016 18:31:15.099060     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fwnm9_kubernetes-dashboard(76cce414-2912-44ba-94e5-1dd398c2a5bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9" podUID="76cce414-2912-44ba-94e5-1dd398c2a5bd"
	Oct 16 18:31:23 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:23.808320     727 scope.go:117] "RemoveContainer" containerID="ff3449af929a9b113391e1ad8e07e8db3a119356c9f106d3f1e2514594ad32e9"
	Oct 16 18:31:23 default-k8s-diff-port-523257 kubelet[727]: E1016 18:31:23.808536     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fwnm9_kubernetes-dashboard(76cce414-2912-44ba-94e5-1dd398c2a5bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9" podUID="76cce414-2912-44ba-94e5-1dd398c2a5bd"
	Oct 16 18:31:27 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:27.136277     727 scope.go:117] "RemoveContainer" containerID="03a3db6c20e6f61d8de12e3b0e8dfa40712be1a186100fddf7ff3c5d3a2e0587"
	Oct 16 18:31:37 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:37.999573     727 scope.go:117] "RemoveContainer" containerID="ff3449af929a9b113391e1ad8e07e8db3a119356c9f106d3f1e2514594ad32e9"
	Oct 16 18:31:38 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:38.172783     727 scope.go:117] "RemoveContainer" containerID="ff3449af929a9b113391e1ad8e07e8db3a119356c9f106d3f1e2514594ad32e9"
	Oct 16 18:31:38 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:38.174514     727 scope.go:117] "RemoveContainer" containerID="e78a709a1e982b94959494ba3fcfe8d1d1c105e0303753e1f0337482c2a83b92"
	Oct 16 18:31:38 default-k8s-diff-port-523257 kubelet[727]: E1016 18:31:38.174784     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fwnm9_kubernetes-dashboard(76cce414-2912-44ba-94e5-1dd398c2a5bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9" podUID="76cce414-2912-44ba-94e5-1dd398c2a5bd"
	Oct 16 18:31:43 default-k8s-diff-port-523257 kubelet[727]: I1016 18:31:43.809364     727 scope.go:117] "RemoveContainer" containerID="e78a709a1e982b94959494ba3fcfe8d1d1c105e0303753e1f0337482c2a83b92"
	Oct 16 18:31:43 default-k8s-diff-port-523257 kubelet[727]: E1016 18:31:43.809600     727 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fwnm9_kubernetes-dashboard(76cce414-2912-44ba-94e5-1dd398c2a5bd)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fwnm9" podUID="76cce414-2912-44ba-94e5-1dd398c2a5bd"
	Oct 16 18:31:49 default-k8s-diff-port-523257 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 16 18:31:49 default-k8s-diff-port-523257 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 16 18:31:49 default-k8s-diff-port-523257 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 16 18:31:49 default-k8s-diff-port-523257 systemd[1]: kubelet.service: Consumed 1.942s CPU time.
	
	
	==> kubernetes-dashboard [ea8b339d31e4fb6b38988c306bb020b4436eeba762aa1a960b6697e387d1a153] <==
	2025/10/16 18:31:07 Using namespace: kubernetes-dashboard
	2025/10/16 18:31:07 Using in-cluster config to connect to apiserver
	2025/10/16 18:31:07 Using secret token for csrf signing
	2025/10/16 18:31:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/16 18:31:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/16 18:31:07 Successful initial request to the apiserver, version: v1.34.1
	2025/10/16 18:31:07 Generating JWE encryption key
	2025/10/16 18:31:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/16 18:31:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/16 18:31:07 Initializing JWE encryption key from synchronized object
	2025/10/16 18:31:07 Creating in-cluster Sidecar client
	2025/10/16 18:31:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/16 18:31:07 Serving insecurely on HTTP port: 9090
	2025/10/16 18:31:07 Starting overwatch
	2025/10/16 18:31:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [03a3db6c20e6f61d8de12e3b0e8dfa40712be1a186100fddf7ff3c5d3a2e0587] <==
	I1016 18:30:56.369029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1016 18:31:26.371194       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [647cec4bbb47274dc1420ae531b76d776191e13d13b9fd04b9491583d76e562b] <==
	I1016 18:31:27.202729       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1016 18:31:27.202784       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1016 18:31:27.205277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:30.659544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:34.920009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:38.518548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:41.572132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:44.596933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:44.603971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 18:31:44.604161       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1016 18:31:44.604656       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a9f3852a-feb3-4f6a-a138-16ba01201036", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-523257_7a5e7606-c7cb-4807-9f24-190560a34cc2 became leader
	I1016 18:31:44.604694       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-523257_7a5e7606-c7cb-4807-9f24-190560a34cc2!
	W1016 18:31:44.610624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:44.616614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1016 18:31:44.705047       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-523257_7a5e7606-c7cb-4807-9f24-190560a34cc2!
	W1016 18:31:46.620515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:46.625808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:48.629977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:48.634685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:50.638422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:50.642783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:52.647608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:52.655142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:54.658661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1016 18:31:54.664320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-523257 -n default-k8s-diff-port-523257
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-523257 -n default-k8s-diff-port-523257: exit status 2 (341.105097ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-523257 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.65s)
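
The status probe above exits 2 even though its stdout prints Running; helpers_test treats this as "may be ok", which is consistent with a paused cluster, since the kubelet log shows systemd stopping kubelet.service at 18:31:49. As a minimal sketch only (not the harness's actual code), the probe amounts to an external command invocation like the following, where the binary path and profile name are taken from this run and exec.Command is the standard library call:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation the post-mortem runs above; a non-zero exit is
		// reported but not treated as fatal ("status error: exit status 2 (may be ok)").
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}",
			"-p", "default-k8s-diff-port-523257",
			"-n", "default-k8s-diff-port-523257")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s(err: %v)\n", out, err)
	}
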
E1016 18:33:11.190157   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/no-preload-808539/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:33:11.196596   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/no-preload-808539/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:33:11.208133   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/no-preload-808539/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:33:11.229566   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/no-preload-808539/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:33:11.271036   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/no-preload-808539/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:33:11.352824   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/no-preload-808539/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:33:11.514402   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/no-preload-808539/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:33:11.836178   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/no-preload-808539/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:33:12.477557   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/no-preload-808539/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:33:13.759358   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/no-preload-808539/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:33:16.321550   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/no-preload-808539/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
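
The eleven "Loading client cert failed" lines above all reference the client certificate of the no-preload-808539 profile, which was deleted earlier in this run; the intervals between their timestamps roughly double (from about 6ms up to 2.6s), the usual exponential-backoff retry pattern. A minimal illustrative sketch of such a stat-and-back-off loop in Go (not client-go's actual implementation; the 10ms base delay and the loop bound are assumptions):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Path taken from the log lines above; the profile no longer exists.
		cert := "/home/jenkins/minikube-integration/21738-8849/.minikube/profiles/no-preload-808539/client.crt"
		delay := 10 * time.Millisecond // assumed base delay; only the doubling is visible in the log
		for attempt := 0; attempt < 11; attempt++ {
			if _, err := os.Stat(cert); err == nil {
				return // certificate reappeared; stop retrying
			}
			fmt.Fprintf(os.Stderr, "Loading client cert failed: open %s: no such file or directory\n", cert)
			time.Sleep(delay)
			delay *= 2 // roughly doubling gaps, matching the timestamps above
		}
	}
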

                                                
                                    

Test pass (263/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.02
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 3.47
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.38
21 TestBinaryMirror 0.81
22 TestOffline 93.27
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 130.94
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 8.44
48 TestAddons/StoppedEnableDisable 18.51
49 TestCertOptions 27.2
50 TestCertExpiration 211.79
52 TestForceSystemdFlag 28.99
53 TestForceSystemdEnv 28.16
55 TestKVMDriverInstallOrUpdate 0.56
59 TestErrorSpam/setup 21.27
60 TestErrorSpam/start 0.64
61 TestErrorSpam/status 0.92
62 TestErrorSpam/pause 6.1
63 TestErrorSpam/unpause 5.32
64 TestErrorSpam/stop 2.54
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 66.59
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.2
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.57
76 TestFunctional/serial/CacheCmd/cache/add_local 0.78
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.53
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 45.74
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.26
87 TestFunctional/serial/LogsFileCmd 1.28
88 TestFunctional/serial/InvalidService 3.95
90 TestFunctional/parallel/ConfigCmd 0.38
91 TestFunctional/parallel/DashboardCmd 6.42
92 TestFunctional/parallel/DryRun 0.38
93 TestFunctional/parallel/InternationalLanguage 0.18
94 TestFunctional/parallel/StatusCmd 0.99
99 TestFunctional/parallel/AddonsCmd 0.12
100 TestFunctional/parallel/PersistentVolumeClaim 22.85
102 TestFunctional/parallel/SSHCmd 0.59
103 TestFunctional/parallel/CpCmd 1.83
104 TestFunctional/parallel/MySQL 19.29
105 TestFunctional/parallel/FileSync 0.32
106 TestFunctional/parallel/CertSync 1.78
110 TestFunctional/parallel/NodeLabels 0.09
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
114 TestFunctional/parallel/License 0.29
115 TestFunctional/parallel/Version/short 0.07
116 TestFunctional/parallel/Version/components 0.58
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
119 TestFunctional/parallel/ImageCommands/ImageListJson 1.89
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
121 TestFunctional/parallel/ImageCommands/ImageBuild 2.77
122 TestFunctional/parallel/ImageCommands/Setup 0.56
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
130 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.41
131 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.2
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
146 TestFunctional/parallel/ProfileCmd/profile_list 0.39
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
148 TestFunctional/parallel/MountCmd/any-port 5.99
149 TestFunctional/parallel/MountCmd/specific-port 1.95
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.78
151 TestFunctional/parallel/ServiceCmd/List 1.7
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.7
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 149.77
164 TestMultiControlPlane/serial/DeployApp 5.74
165 TestMultiControlPlane/serial/PingHostFromPods 0.96
166 TestMultiControlPlane/serial/AddWorkerNode 54.33
167 TestMultiControlPlane/serial/NodeLabels 0.06
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
169 TestMultiControlPlane/serial/CopyFile 16.88
170 TestMultiControlPlane/serial/StopSecondaryNode 19.77
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
172 TestMultiControlPlane/serial/RestartSecondaryNode 8.97
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.89
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 104.78
175 TestMultiControlPlane/serial/DeleteSecondaryNode 10.58
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
177 TestMultiControlPlane/serial/StopCluster 42.18
178 TestMultiControlPlane/serial/RestartCluster 52.77
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
180 TestMultiControlPlane/serial/AddSecondaryNode 76.98
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
185 TestJSONOutput/start/Command 38.86
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.01
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 30.07
211 TestKicCustomNetwork/use_default_bridge_network 25.16
212 TestKicExistingNetwork 25.86
213 TestKicCustomSubnet 24.91
214 TestKicStaticIP 24.11
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 49.3
219 TestMountStart/serial/StartWithMountFirst 5.23
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 5.55
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.7
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.24
226 TestMountStart/serial/RestartStopped 7.27
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 63.25
231 TestMultiNode/serial/DeployApp2Nodes 3.61
232 TestMultiNode/serial/PingHostFrom2Pods 0.65
233 TestMultiNode/serial/AddNode 24.79
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.65
236 TestMultiNode/serial/CopyFile 9.62
237 TestMultiNode/serial/StopNode 2.27
238 TestMultiNode/serial/StartAfterStop 7.5
239 TestMultiNode/serial/RestartKeepsNodes 57.88
240 TestMultiNode/serial/DeleteNode 5
241 TestMultiNode/serial/StopMultiNode 28.52
242 TestMultiNode/serial/RestartMultiNode 45.62
243 TestMultiNode/serial/ValidateNameConflict 23.97
248 TestPreload 116.05
250 TestScheduledStopUnix 96.11
253 TestInsufficientStorage 9.83
254 TestRunningBinaryUpgrade 70.37
256 TestKubernetesUpgrade 299.53
257 TestMissingContainerUpgrade 77.9
258 TestStoppedBinaryUpgrade/Setup 0.55
267 TestPause/serial/Start 61.18
268 TestStoppedBinaryUpgrade/Upgrade 72.39
269 TestPause/serial/SecondStartNoReconfiguration 6.43
272 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
273 TestNoKubernetes/serial/StartWithK8s 26.17
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.52
282 TestNetworkPlugins/group/false 4.94
286 TestNoKubernetes/serial/StartWithStopK8s 19.44
287 TestNoKubernetes/serial/Start 6.67
288 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
289 TestNoKubernetes/serial/ProfileList 2.21
290 TestNoKubernetes/serial/Stop 1.31
291 TestNoKubernetes/serial/StartNoArgs 6.58
292 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
294 TestStartStop/group/old-k8s-version/serial/FirstStart 51.08
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.31
298 TestStartStop/group/no-preload/serial/FirstStart 52.68
299 TestStartStop/group/old-k8s-version/serial/Stop 17.52
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
301 TestStartStop/group/old-k8s-version/serial/SecondStart 51.64
302 TestStartStop/group/no-preload/serial/DeployApp 8.27
304 TestStartStop/group/no-preload/serial/Stop 16.2
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
308 TestStartStop/group/no-preload/serial/SecondStart 50.24
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
312 TestStartStop/group/embed-certs/serial/FirstStart 43.56
314 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 69.73
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
316 TestStartStop/group/embed-certs/serial/DeployApp 8.23
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
321 TestStartStop/group/embed-certs/serial/Stop 18.09
323 TestStartStop/group/newest-cni/serial/FirstStart 26.65
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
325 TestStartStop/group/embed-certs/serial/SecondStart 44.87
326 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.31
329 TestStartStop/group/newest-cni/serial/Stop 3.37
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
331 TestStartStop/group/newest-cni/serial/SecondStart 11.15
333 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.48
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
338 TestNetworkPlugins/group/auto/Start 43.4
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.14
341 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
342 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
343 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
345 TestNetworkPlugins/group/kindnet/Start 44.88
346 TestNetworkPlugins/group/calico/Start 51.17
347 TestNetworkPlugins/group/auto/KubeletFlags 0.33
348 TestNetworkPlugins/group/auto/NetCatPod 8.21
349 TestNetworkPlugins/group/auto/DNS 0.14
350 TestNetworkPlugins/group/auto/Localhost 0.1
351 TestNetworkPlugins/group/auto/HairPin 0.11
352 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
353 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
354 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/custom-flannel/Start 54.9
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
359 TestNetworkPlugins/group/kindnet/NetCatPod 10.21
360 TestNetworkPlugins/group/enable-default-cni/Start 38.8
361 TestNetworkPlugins/group/calico/ControllerPod 6.01
362 TestNetworkPlugins/group/kindnet/DNS 0.12
363 TestNetworkPlugins/group/kindnet/Localhost 0.09
364 TestNetworkPlugins/group/kindnet/HairPin 0.12
365 TestNetworkPlugins/group/calico/KubeletFlags 0.3
366 TestNetworkPlugins/group/calico/NetCatPod 8.23
367 TestNetworkPlugins/group/calico/DNS 0.13
368 TestNetworkPlugins/group/calico/Localhost 0.12
369 TestNetworkPlugins/group/calico/HairPin 0.12
370 TestNetworkPlugins/group/flannel/Start 47.81
371 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
372 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.24
373 TestNetworkPlugins/group/bridge/Start 68.54
374 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
375 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.26
376 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
377 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
378 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
379 TestNetworkPlugins/group/custom-flannel/DNS 0.14
380 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
381 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
384 TestNetworkPlugins/group/flannel/NetCatPod 9.18
385 TestNetworkPlugins/group/flannel/DNS 0.11
386 TestNetworkPlugins/group/flannel/Localhost 0.09
387 TestNetworkPlugins/group/flannel/HairPin 0.09
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
389 TestNetworkPlugins/group/bridge/NetCatPod 7.22
390 TestNetworkPlugins/group/bridge/DNS 0.11
391 TestNetworkPlugins/group/bridge/Localhost 0.09
392 TestNetworkPlugins/group/bridge/HairPin 0.08
TestDownloadOnly/v1.28.0/json-events (5.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-101994 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-101994 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.021569941s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.02s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1016 17:43:35.904103   12375 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1016 17:43:35.904195   12375 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
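
As the preload.go lines show, this check amounts to a stat of a per-version, per-runtime tarball cached under .minikube. A minimal sketch of the same lookup in Go, using the cache path and filename printed in the log above (everything else about the cache layout is assumed):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Filename taken verbatim from the "Found local preload" log line above.
		tarball := "/home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/" +
			"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
		if _, err := os.Stat(tarball); err != nil {
			fmt.Println("no local preload; minikube would fetch images instead:", err)
			return
		}
		fmt.Println("Found local preload:", tarball)
	}
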

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-101994
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-101994: exit status 85 (60.672903ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-101994 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-101994 │ jenkins │ v1.37.0 │ 16 Oct 25 17:43 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 17:43:30
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 17:43:30.921885   12387 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:43:30.922154   12387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:43:30.922164   12387 out.go:374] Setting ErrFile to fd 2...
	I1016 17:43:30.922169   12387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:43:30.922345   12387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	W1016 17:43:30.922463   12387 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21738-8849/.minikube/config/config.json: open /home/jenkins/minikube-integration/21738-8849/.minikube/config/config.json: no such file or directory
	I1016 17:43:30.922985   12387 out.go:368] Setting JSON to true
	I1016 17:43:30.923885   12387 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1559,"bootTime":1760635052,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 17:43:30.923972   12387 start.go:141] virtualization: kvm guest
	I1016 17:43:30.926326   12387 out.go:99] [download-only-101994] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1016 17:43:30.926481   12387 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball: no such file or directory
	I1016 17:43:30.926534   12387 notify.go:220] Checking for updates...
	I1016 17:43:30.928087   12387 out.go:171] MINIKUBE_LOCATION=21738
	I1016 17:43:30.929670   12387 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 17:43:30.930971   12387 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 17:43:30.932332   12387 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 17:43:30.933635   12387 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1016 17:43:30.936135   12387 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1016 17:43:30.936325   12387 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 17:43:30.959540   12387 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 17:43:30.959621   12387 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 17:43:31.379512   12387 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-16 17:43:31.369025792 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 17:43:31.379632   12387 docker.go:318] overlay module found
	I1016 17:43:31.381275   12387 out.go:99] Using the docker driver based on user configuration
	I1016 17:43:31.381308   12387 start.go:305] selected driver: docker
	I1016 17:43:31.381315   12387 start.go:925] validating driver "docker" against <nil>
	I1016 17:43:31.381396   12387 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 17:43:31.438402   12387 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-16 17:43:31.428732252 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 17:43:31.439054   12387 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 17:43:31.439553   12387 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1016 17:43:31.439756   12387 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1016 17:43:31.441636   12387 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-101994 host does not exist
	  To start a cluster, run: "minikube start -p download-only-101994"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-101994
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (3.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-309311 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-309311 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.474209976s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.47s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1016 17:43:39.796902   12375 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1016 17:43:39.796950   12375 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21738-8849/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-309311
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-309311: exit status 85 (64.677401ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-101994 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-101994 │ jenkins │ v1.37.0 │ 16 Oct 25 17:43 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 16 Oct 25 17:43 UTC │ 16 Oct 25 17:43 UTC │
	│ delete  │ -p download-only-101994                                                                                                                                                   │ download-only-101994 │ jenkins │ v1.37.0 │ 16 Oct 25 17:43 UTC │ 16 Oct 25 17:43 UTC │
	│ start   │ -o=json --download-only -p download-only-309311 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-309311 │ jenkins │ v1.37.0 │ 16 Oct 25 17:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/16 17:43:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1016 17:43:36.362892   12744 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:43:36.363108   12744 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:43:36.363115   12744 out.go:374] Setting ErrFile to fd 2...
	I1016 17:43:36.363120   12744 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:43:36.363297   12744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:43:36.363778   12744 out.go:368] Setting JSON to true
	I1016 17:43:36.364529   12744 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1564,"bootTime":1760635052,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 17:43:36.364609   12744 start.go:141] virtualization: kvm guest
	I1016 17:43:36.366467   12744 out.go:99] [download-only-309311] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 17:43:36.366601   12744 notify.go:220] Checking for updates...
	I1016 17:43:36.367805   12744 out.go:171] MINIKUBE_LOCATION=21738
	I1016 17:43:36.369110   12744 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 17:43:36.370557   12744 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 17:43:36.371833   12744 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 17:43:36.372990   12744 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1016 17:43:36.375336   12744 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1016 17:43:36.375601   12744 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 17:43:36.400589   12744 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 17:43:36.400730   12744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 17:43:36.458007   12744 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-16 17:43:36.447197296 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 17:43:36.458121   12744 docker.go:318] overlay module found
	I1016 17:43:36.460219   12744 out.go:99] Using the docker driver based on user configuration
	I1016 17:43:36.460250   12744 start.go:305] selected driver: docker
	I1016 17:43:36.460258   12744 start.go:925] validating driver "docker" against <nil>
	I1016 17:43:36.460343   12744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 17:43:36.517304   12744 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-16 17:43:36.5061849 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 17:43:36.517441   12744 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1016 17:43:36.517907   12744 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1016 17:43:36.518048   12744 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1016 17:43:36.520072   12744 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-309311 host does not exist
	  To start a cluster, run: "minikube start -p download-only-309311"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)
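
Exit status 85 is expected here: the download-only profile was never started, so `minikube logs` has no host to read, and the test asserts the failure rather than the output. A minimal Go sketch of that assert-a-specific-exit-code pattern (command and code copied from the log; this is not the suite's actual helper):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Run the same command the test runs, and assert on the specific
		// exit code instead of treating any non-zero status as failure.
		err := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-309311").Run()

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
			fmt.Println("got the expected exit code for a host that does not exist")
			return
		}
		fmt.Println("unexpected result:", err)
	}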

TestDownloadOnly/v1.34.1/DeleteAll (0.22s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-309311
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (0.38s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-369292 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-369292" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-369292
--- PASS: TestDownloadOnlyKic (0.38s)

TestBinaryMirror (0.81s)
=== RUN   TestBinaryMirror
I1016 17:43:40.863451   12375 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-905459 --alsologtostderr --binary-mirror http://127.0.0.1:34337 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-905459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-905459
--- PASS: TestBinaryMirror (0.81s)
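
The `?checksum=file:...sha256` suffix logged above means the kubectl binary is verified against its published SHA-256 digest instead of being cached blindly. A hedged sketch of that verify step (both URLs come from the log; the fetch-and-compare logic is illustrative, not minikube's downloader):

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	// fetch returns the body at url; error handling is simplified for brevity.
	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		return io.ReadAll(resp.Body)
	}

	func main() {
		binURL := "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl"
		sumURL := binURL + ".sha256"

		bin, err := fetch(binURL)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(sumURL)
		if err != nil {
			panic(err)
		}

		got := sha256.Sum256(bin)
		want := strings.Fields(string(sum))[0] // checksum file holds the hex digest
		fmt.Println("checksum ok:", hex.EncodeToString(got[:]) == want)
	}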

TestOffline (93.27s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-747718 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-747718 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m29.607140183s)
helpers_test.go:175: Cleaning up "offline-crio-747718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-747718
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-747718: (3.660294568s)
--- PASS: TestOffline (93.27s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-431183
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-431183: exit status 85 (51.209436ms)
-- stdout --
	* Profile "addons-431183" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-431183"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-431183
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-431183: exit status 85 (52.467766ms)
-- stdout --
	* Profile "addons-431183" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-431183"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (130.94s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-431183 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-431183 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m10.93533495s)
--- PASS: TestAddons/Setup (130.94s)

TestAddons/serial/GCPAuth/Namespaces (0.15s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-431183 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-431183 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/serial/GCPAuth/FakeCredentials (8.44s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-431183 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-431183 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9bcd0883-4637-415b-979c-50c3856ec728] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9bcd0883-4637-415b-979c-50c3856ec728] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003930872s
addons_test.go:694: (dbg) Run:  kubectl --context addons-431183 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-431183 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-431183 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.44s)
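
The wait step above polls until a pod matching `integration-test=busybox` is healthy, with an 8m0s ceiling. A minimal sketch of the same poll-until-Running idea via kubectl (context and label come from the log; the loop and the phase-only check are a simplification of the suite's health check):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(8 * time.Minute)
		for time.Now().Before(deadline) {
			// Ask kubectl only for the pod phase; "Running" ends the wait.
			out, _ := exec.Command("kubectl", "--context", "addons-431183",
				"get", "pods", "-l", "integration-test=busybox",
				"-o", "jsonpath={.items[*].status.phase}").Output()
			if strings.Contains(string(out), "Running") {
				fmt.Println("pod is Running")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod")
	}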

TestAddons/StoppedEnableDisable (18.51s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-431183
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-431183: (18.251175856s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-431183
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-431183
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-431183
--- PASS: TestAddons/StoppedEnableDisable (18.51s)

TestCertOptions (27.2s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-817096 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1016 18:25:53.251255   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-817096 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.847975992s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-817096 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-817096 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-817096 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-817096" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-817096
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-817096: (2.633855834s)
--- PASS: TestCertOptions (27.20s)

TestCertExpiration (211.79s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-489554 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-489554 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (23.025699061s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-489554 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-489554 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.299507973s)
helpers_test.go:175: Cleaning up "cert-expiration-489554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-489554
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-489554: (2.460533583s)
--- PASS: TestCertExpiration (211.79s)
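
The two starts exercise both sides of certificate rotation: first a 3m expiry, so the certs lapse before the second start, then `--cert-expiration=8760h`, which is exactly 365 days. A quick check of that arithmetic in Go:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		d, _ := time.ParseDuration("8760h")
		// 8760h / 24h = 365 days: the conventional one-year certificate lifetime.
		fmt.Println(d.Hours()/24, "days") // prints: 365 days
	}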

TestForceSystemdFlag (28.99s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-607466 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-607466 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.11072392s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-607466 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-607466" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-607466
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-607466: (2.52823593s)
--- PASS: TestForceSystemdFlag (28.99s)
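
The ssh step dumps CRI-O's drop-in config; with `--force-systemd` the run is expected to select the systemd cgroup manager there. A hedged sketch of that assertion (profile name and conf path are from the log; the expected TOML key `cgroup_manager = "systemd"` is CRI-O's standard setting, assumed here rather than quoted from the test):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Dump the same drop-in the test reads and look for the systemd manager.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-607466",
			"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("systemd cgroup manager:",
			strings.Contains(string(out), `cgroup_manager = "systemd"`))
	}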

TestForceSystemdEnv (28.16s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-275318 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-275318 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.703871418s)
helpers_test.go:175: Cleaning up "force-systemd-env-275318" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-275318
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-275318: (2.452211906s)
--- PASS: TestForceSystemdEnv (28.16s)

TestKVMDriverInstallOrUpdate (0.56s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I1016 18:25:20.217903   12375 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1016 18:25:20.218097   12375 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3650565946/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1016 18:25:20.251765   12375 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3650565946/001/docker-machine-driver-kvm2 version is 1.1.1
W1016 18:25:20.251820   12375 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1016 18:25:20.251979   12375 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1016 18:25:20.252031   12375 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3650565946/001/docker-machine-driver-kvm2
I1016 18:25:20.634060   12375 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3650565946/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1016 18:25:20.651968   12375 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3650565946/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.56s)
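
The sequence in the log is: validate the driver on PATH, notice version 1.1.1 != wanted 1.37.0, download the 1.37.0 release (again with a checksum= file reference), and re-validate. A hedged Go sketch of that validate-then-refresh flow (the version probe, install path, and elided download step are illustrative stand-ins, not install.go itself):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// installedVersion is a hypothetical probe: run the driver binary with
	// "version" and take the last whitespace-separated token as its version.
	func installedVersion(path string) (string, error) {
		out, err := exec.Command(path, "version").CombinedOutput()
		if err != nil {
			return "", err
		}
		fields := strings.Fields(string(out))
		if len(fields) == 0 {
			return "", fmt.Errorf("no version output")
		}
		return strings.TrimPrefix(fields[len(fields)-1], "v"), nil
	}

	func main() {
		const want = "1.37.0"
		path := "/tmp/docker-machine-driver-kvm2" // hypothetical install location

		got, err := installedVersion(path)
		if err != nil || got != want {
			// Mirrors the log: a wrong or missing version triggers a
			// re-download of the wanted release (download step elided).
			fmt.Printf("would fetch v%s from the minikube releases page (have %q)\n", want, got)
			return
		}
		fmt.Println("driver is current:", got)
	}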

TestErrorSpam/setup (21.27s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-098160 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-098160 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-098160 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-098160 --driver=docker  --container-runtime=crio: (21.273035523s)
--- PASS: TestErrorSpam/setup (21.27s)

TestErrorSpam/start (0.64s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 start --dry-run
--- PASS: TestErrorSpam/start (0.64s)

TestErrorSpam/status (0.92s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 status
--- PASS: TestErrorSpam/status (0.92s)

TestErrorSpam/pause (6.1s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 pause: exit status 80 (2.079575732s)
-- stdout --
	* Pausing node nospam-098160 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:49:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 pause: exit status 80 (1.973232085s)
-- stdout --
	* Pausing node nospam-098160 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:49:31Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 pause: exit status 80 (2.044839449s)
-- stdout --
	* Pausing node nospam-098160 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:49:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.10s)
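
All three pause attempts fail the same way: per the stderr above, minikube enumerates containers with `sudo runc list -f json` before pausing, and on this CRI-O node runc's state directory /run/runc is missing, so the listing fails and pause exits with GUEST_PAUSE (status 80). The unpause failures below share the same root cause. A minimal Go reproduction of just the failing probe, to be run inside the node (the error matching is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The same probe the pause path runs; on this node it failed with:
		// open /run/runc: no such file or directory
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil && strings.Contains(string(out), "/run/runc") {
			fmt.Println("runc state dir missing; pause/unpause exit with status 80")
			return
		}
		fmt.Println("runc list succeeded:", err == nil)
	}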

TestErrorSpam/unpause (5.32s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 unpause: exit status 80 (1.555562976s)
-- stdout --
	* Unpausing node nospam-098160 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:49:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 unpause: exit status 80 (1.893889471s)
-- stdout --
	* Unpausing node nospam-098160 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:49:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 unpause: exit status 80 (1.865143179s)
-- stdout --
	* Unpausing node nospam-098160 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-16T17:49:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.32s)

TestErrorSpam/stop (2.54s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 stop: (2.364603856s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-098160 --log_dir /tmp/nospam-098160 stop
--- PASS: TestErrorSpam/stop (2.54s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21738-8849/.minikube/files/etc/test/nested/copy/12375/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (66.59s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-363627 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-363627 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m6.593574318s)
--- PASS: TestFunctional/serial/StartWithProxy (66.59s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.2s)
=== RUN   TestFunctional/serial/SoftStart
I1016 17:50:52.893190   12375 config.go:182] Loaded profile config "functional-363627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-363627 --alsologtostderr -v=8
E1016 17:50:53.251528   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:50:53.257938   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:50:53.269345   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:50:53.290826   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:50:53.332254   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:50:53.414457   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:50:53.576684   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:50:53.898775   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:50:54.541020   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:50:55.822959   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:50:58.384234   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-363627 --alsologtostderr -v=8: (6.195244781s)
functional_test.go:678: soft start took 6.196097888s for "functional-363627" cluster.
I1016 17:50:59.088834   12375 config.go:182] Loaded profile config "functional-363627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.20s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-363627 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.57s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.57s)

TestFunctional/serial/CacheCmd/cache/add_local (0.78s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-363627 /tmp/TestFunctionalserialCacheCmdcacheadd_local2501425831/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 cache add minikube-local-cache-test:functional-363627
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 cache delete minikube-local-cache-test:functional-363627
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-363627
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.78s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E1016 17:51:03.506318   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-363627 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (277.162046ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)
TestFunctional/serial/CacheCmd/cache/delete (0.1s)
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 kubectl -- --context functional-363627 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-363627 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (45.74s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-363627 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1016 17:51:13.748272   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:51:34.230530   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-363627 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.741602497s)
functional_test.go:776: restart took 45.741732885s for "functional-363627" cluster.
I1016 17:51:50.548699   12375 config.go:182] Loaded profile config "functional-363627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (45.74s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-363627 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
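
The check above lists the control-plane pods as JSON and asserts each is phase Running and condition Ready. A rough standalone equivalent, assuming kubectl on PATH and the functional-363627 context (names reused from the log; this is a sketch, not the harness code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Just the fields the check needs; json.Unmarshal matches them case-insensitively.
    type podList struct {
        Items []struct {
            Metadata struct{ Name string }
            Status   struct {
                Phase      string
                Conditions []struct{ Type, Status string }
            }
        }
    }

    func main() {
        out, err := exec.Command("kubectl", "--context", "functional-363627",
            "get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
        if err != nil {
            panic(err)
        }
        var pods podList
        if err := json.Unmarshal(out, &pods); err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            ready := "Unknown"
            for _, c := range p.Status.Conditions {
                if c.Type == "Ready" {
                    ready = c.Status
                }
            }
            // Mirrors the "<name> phase: Running" / "status: Ready" lines above.
            fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
        }
    }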

TestFunctional/serial/LogsCmd (1.26s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-363627 logs: (1.262332696s)
--- PASS: TestFunctional/serial/LogsCmd (1.26s)

TestFunctional/serial/LogsFileCmd (1.28s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 logs --file /tmp/TestFunctionalserialLogsFileCmd4179872602/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-363627 logs --file /tmp/TestFunctionalserialLogsFileCmd4179872602/001/logs.txt: (1.281624607s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.28s)

TestFunctional/serial/InvalidService (3.95s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-363627 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-363627
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-363627: exit status 115 (341.734769ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32152 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-363627 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.95s)
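
The SVC_UNREACHABLE exit above comes from a Service whose selector matches no running pod, so the NodePort URL printed in the table has nothing behind it. A sketch of the underlying condition, run while the invalid service from testdata/invalidsvc.yaml still exists (kubectl on PATH and the same context assumed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // An Endpoints object with no addresses means no ready pod backs the Service.
        out, _ := exec.Command("kubectl", "--context", "functional-363627",
            "get", "endpoints", "invalid-svc",
            "-o", "jsonpath={.subsets[*].addresses[*].ip}").CombinedOutput()
        if len(out) == 0 {
            fmt.Println("no ready endpoints: the NodePort URL cannot answer (SVC_UNREACHABLE)")
        } else {
            fmt.Printf("ready endpoint IPs: %s\n", out)
        }
    }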

TestFunctional/parallel/ConfigCmd (0.38s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-363627 config get cpus: exit status 14 (73.202043ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-363627 config get cpus: exit status 14 (54.233365ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
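
The assertions above hinge on exit codes: `config get` on an unset key exits 14, while a set key exits 0. A sketch of that contract, with the minikube binary and profile name assumed as before:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // exitCode runs minikube and extracts the process exit status.
    func exitCode(args ...string) int {
        if err := exec.Command("minikube", args...).Run(); err != nil {
            if ee, ok := err.(*exec.ExitError); ok {
                return ee.ExitCode()
            }
            return -1 // could not even start the command
        }
        return 0
    }

    func main() {
        p := "functional-363627"
        exitCode("-p", p, "config", "unset", "cpus")
        fmt.Println("get after unset:", exitCode("-p", p, "config", "get", "cpus")) // expect 14, as logged
        exitCode("-p", p, "config", "set", "cpus", "2")
        fmt.Println("get after set:", exitCode("-p", p, "config", "get", "cpus")) // expect 0
    }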

TestFunctional/parallel/DashboardCmd (6.42s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-363627 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-363627 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 51045: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.42s)

TestFunctional/parallel/DryRun (0.38s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-363627 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-363627 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (161.627467ms)

-- stdout --
	* [functional-363627] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1016 17:52:20.096653   50621 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:52:20.096967   50621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:52:20.096978   50621 out.go:374] Setting ErrFile to fd 2...
	I1016 17:52:20.096985   50621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:52:20.097217   50621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:52:20.097676   50621 out.go:368] Setting JSON to false
	I1016 17:52:20.098644   50621 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2088,"bootTime":1760635052,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 17:52:20.098754   50621 start.go:141] virtualization: kvm guest
	I1016 17:52:20.101467   50621 out.go:179] * [functional-363627] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 17:52:20.103278   50621 notify.go:220] Checking for updates...
	I1016 17:52:20.103336   50621 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 17:52:20.104847   50621 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 17:52:20.106148   50621 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 17:52:20.107583   50621 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 17:52:20.108947   50621 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 17:52:20.110581   50621 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 17:52:20.112483   50621 config.go:182] Loaded profile config "functional-363627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:52:20.113213   50621 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 17:52:20.138177   50621 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 17:52:20.138327   50621 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 17:52:20.200183   50621 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-16 17:52:20.190000546 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 17:52:20.200289   50621 docker.go:318] overlay module found
	I1016 17:52:20.202207   50621 out.go:179] * Using the docker driver based on existing profile
	I1016 17:52:20.203924   50621 start.go:305] selected driver: docker
	I1016 17:52:20.203940   50621 start.go:925] validating driver "docker" against &{Name:functional-363627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-363627 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 17:52:20.204028   50621 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 17:52:20.206376   50621 out.go:203] 
	W1016 17:52:20.207972   50621 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1016 17:52:20.209573   50621 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-363627 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.38s)
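
The dry run fails fast because the requested memory (250MiB) is below the usable floor quoted in the error (1800MB); no containers are created. A sketch of that validation, with the floor taken from the logged message and the flag parsing omitted:

    package main

    import "fmt"

    // minUsableMB is the floor quoted by RSRC_INSUFFICIENT_REQ_MEMORY above.
    const minUsableMB = 1800

    func validateMemoryMB(req int) error {
        if req < minUsableMB {
            return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
                req, minUsableMB)
        }
        return nil
    }

    func main() {
        fmt.Println(validateMemoryMB(250))  // fails, as in the dry run above
        fmt.Println(validateMemoryMB(4096)) // passes
    }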

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-363627 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-363627 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (184.291496ms)

-- stdout --
	* [functional-363627] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1016 17:52:22.434777   51117 out.go:360] Setting OutFile to fd 1 ...
	I1016 17:52:22.434887   51117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:52:22.434895   51117 out.go:374] Setting ErrFile to fd 2...
	I1016 17:52:22.434900   51117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 17:52:22.435203   51117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 17:52:22.435630   51117 out.go:368] Setting JSON to false
	I1016 17:52:22.436581   51117 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2090,"bootTime":1760635052,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 17:52:22.436670   51117 start.go:141] virtualization: kvm guest
	I1016 17:52:22.438491   51117 out.go:179] * [functional-363627] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1016 17:52:22.440448   51117 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 17:52:22.440496   51117 notify.go:220] Checking for updates...
	I1016 17:52:22.443146   51117 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 17:52:22.444213   51117 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 17:52:22.445495   51117 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 17:52:22.446833   51117 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 17:52:22.451341   51117 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 17:52:22.453340   51117 config.go:182] Loaded profile config "functional-363627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 17:52:22.453986   51117 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 17:52:22.484512   51117 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 17:52:22.484618   51117 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 17:52:22.557373   51117 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-16 17:52:22.543886878 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 17:52:22.557513   51117 docker.go:318] overlay module found
	I1016 17:52:22.560121   51117 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1016 17:52:22.561495   51117 start.go:305] selected driver: docker
	I1016 17:52:22.561513   51117 start.go:925] validating driver "docker" against &{Name:functional-363627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760363564-21724@sha256:3d243c9fb0952e24526c917e5809c5ed926108eae97e8156b6e33fc1d2564225 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-363627 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1016 17:52:22.561645   51117 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 17:52:22.563759   51117 out.go:203] 
	W1016 17:52:22.565516   51117 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1016 17:52:22.566770   51117 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (0.99s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (22.85s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [e9cf83e1-5c60-4963-accd-5588050ea717] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003929512s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-363627 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-363627 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-363627 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-363627 apply -f testdata/storage-provisioner/pod.yaml
I1016 17:52:04.896404   12375 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b64606fe-3795-4d7c-b8e9-6a58224f3e95] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [b64606fe-3795-4d7c-b8e9-6a58224f3e95] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003388329s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-363627 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-363627 delete -f testdata/storage-provisioner/pod.yaml
E1016 17:52:15.191868   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-363627 delete -f testdata/storage-provisioner/pod.yaml: (1.187563571s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-363627 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [3600309c-09b3-4277-8d5b-612ab6788a82] Pending
helpers_test.go:352: "sp-pod" [3600309c-09b3-4277-8d5b-612ab6788a82] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.006313691s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-363627 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (22.85s)
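
The sequence above proves persistence: a marker file written through the first sp-pod survives the pod's deletion because it lives on the PersistentVolumeClaim, not in the container. A compressed sketch of the same round trip, assuming kubectl on PATH, the same context, and the test's storage-provisioner manifests (paths reused from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // kubectl runs a command against the assumed functional-363627 context.
    func kubectl(args ...string) ([]byte, error) {
        full := append([]string{"--context", "functional-363627"}, args...)
        return exec.Command("kubectl", full...).CombinedOutput()
    }

    func main() {
        // Write a marker onto the claim, then delete and recreate the pod.
        kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
        kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
        kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
        kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=6m0s")

        // The marker must still be there: the PVC, not the pod, holds the data.
        out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
        fmt.Printf("ls /tmp/mount: %s err=%v\n", out, err)
    }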

TestFunctional/parallel/SSHCmd (0.59s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.59s)

TestFunctional/parallel/CpCmd (1.83s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh -n functional-363627 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 cp functional-363627:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2253464996/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh -n functional-363627 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh -n functional-363627 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.83s)

TestFunctional/parallel/MySQL (19.29s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-363627 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-qqhbc" [ecdd34bd-9b27-46e1-b71c-29e1b31f7e25] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2025/10/16 17:52:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "mysql-5bb876957f-qqhbc" [ecdd34bd-9b27-46e1-b71c-29e1b31f7e25] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.004051208s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-363627 exec mysql-5bb876957f-qqhbc -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-363627 exec mysql-5bb876957f-qqhbc -- mysql -ppassword -e "show databases;": exit status 1 (86.49853ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1016 17:52:39.190666   12375 retry.go:31] will retry after 767.468895ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-363627 exec mysql-5bb876957f-qqhbc -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-363627 exec mysql-5bb876957f-qqhbc -- mysql -ppassword -e "show databases;": exit status 1 (90.422626ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1016 17:52:40.049237   12375 retry.go:31] will retry after 2.047441179s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-363627 exec mysql-5bb876957f-qqhbc -- mysql -ppassword -e "show databases;"
E1016 17:53:37.113709   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:55:53.251787   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 17:56:20.955387   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:00:53.251904   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (19.29s)
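
The two Non-zero exits above are expected: mysqld takes a while to accept connections after the pod reports Running, so the harness probes and backs off (retry.go logs "will retry after ..."). A sketch of that retry loop, with illustrative backoff values and the pod name taken from this run:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // probeMySQL mirrors the logged "show databases;" probe.
    func probeMySQL() error {
        return exec.Command("kubectl", "--context", "functional-363627",
            "exec", "mysql-5bb876957f-qqhbc", "--",
            "mysql", "-ppassword", "-e", "show databases;").Run()
    }

    func main() {
        backoff := 500 * time.Millisecond
        for attempt := 1; attempt <= 5; attempt++ {
            if probeMySQL() == nil {
                fmt.Println("mysql is up")
                return
            }
            fmt.Printf("attempt %d failed, will retry after %v\n", attempt, backoff)
            time.Sleep(backoff)
            backoff *= 2 // grow the wait, as the logged retry intervals do
        }
        fmt.Println("mysql never became ready")
    }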

TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/12375/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "sudo cat /etc/test/nested/copy/12375/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

TestFunctional/parallel/CertSync (1.78s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/12375.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "sudo cat /etc/ssl/certs/12375.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/12375.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "sudo cat /usr/share/ca-certificates/12375.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/123752.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "sudo cat /etc/ssl/certs/123752.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/123752.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "sudo cat /usr/share/ca-certificates/123752.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.78s)
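
The hash-named paths checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: each cert in /etc/ssl/certs is also reachable as <subject-hash>.0. A sketch of deriving that name, assuming openssl is installed and using an illustrative input path:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // `openssl x509 -hash` prints the subject hash of the (first) cert in the file.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/etc/ssl/certs/ca-certificates.crt").Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        fmt.Printf("expected hash link: /etc/ssl/certs/%s.0\n", hash)
    }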

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-363627 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-363627 ssh "sudo systemctl is-active docker": exit status 1 (321.401758ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-363627 ssh "sudo systemctl is-active containerd": exit status 1 (299.122785ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
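
Both probes above "fail" by design: `systemctl is-active` prints inactive and exits with status 3 when a unit is stopped, so a non-zero ssh exit plus "inactive" on stdout is exactly what a crio-only node should produce. A sketch of the same check, with the minikube binary and profile assumed as before:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // isActive reports whether a systemd unit inside the node is running.
    // A non-zero exit with "inactive" output means the unit exists but is stopped.
    func isActive(profile, unit string) bool {
        out, err := exec.Command("minikube", "-p", profile, "ssh",
            "sudo systemctl is-active "+unit).CombinedOutput()
        return err == nil && strings.TrimSpace(string(out)) == "active"
    }

    func main() {
        for _, unit := range []string{"docker", "containerd"} {
            fmt.Printf("%s active: %v\n", unit, isActive("functional-363627", unit))
        }
    }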

TestFunctional/parallel/License (0.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.58s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.58s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-363627 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ alpine             │ 5e7abcdd20216 │ 54.2MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-363627 image ls --format table --alsologtostderr:
I1016 17:52:32.156852   52213 out.go:360] Setting OutFile to fd 1 ...
I1016 17:52:32.157096   52213 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:52:32.157106   52213 out.go:374] Setting ErrFile to fd 2...
I1016 17:52:32.157110   52213 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:52:32.157277   52213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
I1016 17:52:32.157851   52213 config.go:182] Loaded profile config "functional-363627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:52:32.157938   52213 config.go:182] Loaded profile config "functional-363627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:52:32.158337   52213 cli_runner.go:164] Run: docker container inspect functional-363627 --format={{.State.Status}}
I1016 17:52:32.179139   52213 ssh_runner.go:195] Run: systemctl --version
I1016 17:52:32.179199   52213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-363627
I1016 17:52:32.200038   52213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/functional-363627/id_rsa Username:docker}
I1016 17:52:32.299740   52213 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (1.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 image ls --format json --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-363627 image ls --format json --alsologtostderr: (1.889802009s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-363627 image ls --format json --alsologtostderr:
[{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":["docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22","docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54168570"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-363627 image ls --format json --alsologtostderr:
I1016 17:52:30.279805   52112 out.go:360] Setting OutFile to fd 1 ...
I1016 17:52:30.280097   52112 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:52:30.280110   52112 out.go:374] Setting ErrFile to fd 2...
I1016 17:52:30.280117   52112 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:52:30.280412   52112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
I1016 17:52:30.281190   52112 config.go:182] Loaded profile config "functional-363627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:52:30.281318   52112 config.go:182] Loaded profile config "functional-363627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:52:30.281879   52112 cli_runner.go:164] Run: docker container inspect functional-363627 --format={{.State.Status}}
I1016 17:52:30.303313   52112 ssh_runner.go:195] Run: systemctl --version
I1016 17:52:30.303361   52112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-363627
I1016 17:52:30.325960   52112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/functional-363627/id_rsa Username:docker}
I1016 17:52:30.432186   52112 ssh_runner.go:195] Run: sudo crictl images --output json
I1016 17:52:32.106195   52112 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.673964288s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (1.89s)
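
Note: the stdout above is one JSON array of image objects, each carrying id, repoDigests, repoTags, and size. A minimal sketch for extracting just the tag names from it (jq is an assumption here; the test itself does no such filtering):

    out/minikube-linux-amd64 -p functional-363627 image ls --format json | jq -r '.[].repoTags[]'

Entries with an empty repoTags list, such as the dashboard and metrics-scraper images, simply emit nothing under this filter.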

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-363627 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
- docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e
repoTags:
- docker.io/library/nginx:alpine
size: "54168570"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-363627 image ls --format yaml --alsologtostderr:
I1016 17:52:32.382627   52294 out.go:360] Setting OutFile to fd 1 ...
I1016 17:52:32.382881   52294 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:52:32.382889   52294 out.go:374] Setting ErrFile to fd 2...
I1016 17:52:32.382893   52294 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:52:32.383126   52294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
I1016 17:52:32.383677   52294 config.go:182] Loaded profile config "functional-363627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:52:32.383774   52294 config.go:182] Loaded profile config "functional-363627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:52:32.384111   52294 cli_runner.go:164] Run: docker container inspect functional-363627 --format={{.State.Status}}
I1016 17:52:32.403308   52294 ssh_runner.go:195] Run: systemctl --version
I1016 17:52:32.403359   52294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-363627
I1016 17:52:32.421135   52294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/functional-363627/id_rsa Username:docker}
I1016 17:52:32.517471   52294 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
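
Note: as the stderr above shows, both the JSON and YAML listings are backed by crictl on the node. The same raw data can be inspected directly over ssh (a sketch using the ssh subcommand that appears elsewhere in this report):

    out/minikube-linux-amd64 -p functional-363627 ssh -- sudo crictl images --output json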

TestFunctional/parallel/ImageCommands/ImageBuild (2.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-363627 ssh pgrep buildkitd: exit status 1 (267.779067ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 image build -t localhost/my-image:functional-363627 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-363627 image build -t localhost/my-image:functional-363627 testdata/build --alsologtostderr: (2.285032406s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-363627 image build -t localhost/my-image:functional-363627 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> df9fbe0accb
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-363627
--> b4f2e098c2f
Successfully tagged localhost/my-image:functional-363627
b4f2e098c2feaa5229863e366db07ae584ef5a21a08a1c0b34c981f0f06ec285
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-363627 image build -t localhost/my-image:functional-363627 testdata/build --alsologtostderr:
I1016 17:52:32.868515   52514 out.go:360] Setting OutFile to fd 1 ...
I1016 17:52:32.868682   52514 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:52:32.868693   52514 out.go:374] Setting ErrFile to fd 2...
I1016 17:52:32.868699   52514 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1016 17:52:32.868916   52514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
I1016 17:52:32.869505   52514 config.go:182] Loaded profile config "functional-363627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:52:32.870212   52514 config.go:182] Loaded profile config "functional-363627": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1016 17:52:32.870614   52514 cli_runner.go:164] Run: docker container inspect functional-363627 --format={{.State.Status}}
I1016 17:52:32.889211   52514 ssh_runner.go:195] Run: systemctl --version
I1016 17:52:32.889278   52514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-363627
I1016 17:52:32.908116   52514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/functional-363627/id_rsa Username:docker}
I1016 17:52:33.006656   52514 build_images.go:161] Building image from path: /tmp/build.810708446.tar
I1016 17:52:33.006761   52514 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1016 17:52:33.015762   52514 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.810708446.tar
I1016 17:52:33.020205   52514 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.810708446.tar: stat -c "%s %y" /var/lib/minikube/build/build.810708446.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.810708446.tar': No such file or directory
I1016 17:52:33.020233   52514 ssh_runner.go:362] scp /tmp/build.810708446.tar --> /var/lib/minikube/build/build.810708446.tar (3072 bytes)
I1016 17:52:33.041457   52514 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.810708446
I1016 17:52:33.050703   52514 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.810708446 -xf /var/lib/minikube/build/build.810708446.tar
I1016 17:52:33.059919   52514 crio.go:315] Building image: /var/lib/minikube/build/build.810708446
I1016 17:52:33.059986   52514 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-363627 /var/lib/minikube/build/build.810708446 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1016 17:52:35.082812   52514 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-363627 /var/lib/minikube/build/build.810708446 --cgroup-manager=cgroupfs: (2.022799825s)
I1016 17:52:35.082876   52514 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.810708446
I1016 17:52:35.091253   52514 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.810708446.tar
I1016 17:52:35.099266   52514 build_images.go:217] Built localhost/my-image:functional-363627 from /tmp/build.810708446.tar
I1016 17:52:35.099299   52514 build_images.go:133] succeeded building to: functional-363627
I1016 17:52:35.099303   52514 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.77s)
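
Note: the stderr above traces the whole build path on a crio runtime: the testdata/build context is tarred locally, copied to /var/lib/minikube/build on the node, unpacked, and built there with sudo podman build --cgroup-manager=cgroupfs. The user-facing equivalent, trimmed of test flags, is simply:

    out/minikube-linux-amd64 -p functional-363627 image build -t localhost/my-image:functional-363627 testdata/build
    out/minikube-linux-amd64 -p functional-363627 image ls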

TestFunctional/parallel/ImageCommands/Setup (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-363627
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.56s)
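
Note: Setup only stages a locally tagged image for the later image-command tests. A sketch of that staging plus pushing the image into the cluster (the image load step is an assumption, not part of this test):

    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-363627
    out/minikube-linux-amd64 -p functional-363627 image load kicbase/echo-server:functional-363627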

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)
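
Note: all three update-context variants run the same command, which rewrites the profile's kubeconfig entry so it reflects the cluster's current IP and port. A sketch of verifying the result by hand (the kubectl step is an assumption, not used by the test):

    out/minikube-linux-amd64 -p functional-363627 update-context
    kubectl config current-context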

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-363627 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-363627 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-363627 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-363627 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 46164: os: process already finished
helpers_test.go:519: unable to terminate pid 45905: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-363627 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.2s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-363627 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [4c1e0dc1-b6ff-471d-958e-7e64611c7056] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [4c1e0dc1-b6ff-471d-958e-7e64611c7056] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003349507s
I1016 17:52:07.972511   12375 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.20s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 image rm kicbase/echo-server:functional-363627 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-363627 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.219.122 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
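
Note: taken together, the tunnel tests follow the standard LoadBalancer workflow: start minikube tunnel in the background, deploy a Service of type LoadBalancer, wait for an ingress IP, then hit it directly. A hand-run sketch (the curl step and the placeholder IP are assumptions):

    out/minikube-linux-amd64 -p functional-363627 tunnel &
    kubectl --context functional-363627 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://<ingress-ip>/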

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-363627 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "335.747906ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "50.657747ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "340.659438ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "53.011318ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)
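
Note: a sketch for consuming the JSON form of the profile list (jq, and the assumption that current minikube groups profiles under valid and invalid keys with a Name field per profile):

    out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'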

TestFunctional/parallel/MountCmd/any-port (5.99s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-363627 /tmp/TestFunctionalparallelMountCmdany-port3664932529/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760637130332988282" to /tmp/TestFunctionalparallelMountCmdany-port3664932529/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760637130332988282" to /tmp/TestFunctionalparallelMountCmdany-port3664932529/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760637130332988282" to /tmp/TestFunctionalparallelMountCmdany-port3664932529/001/test-1760637130332988282
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-363627 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (286.126344ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1016 17:52:10.619438   12375 retry.go:31] will retry after 744.766152ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 16 17:52 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 16 17:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 16 17:52 test-1760637130332988282
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh cat /mount-9p/test-1760637130332988282
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-363627 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [81de3c7c-3475-4886-a08c-9bdf64013476] Pending
helpers_test.go:352: "busybox-mount" [81de3c7c-3475-4886-a08c-9bdf64013476] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [81de3c7c-3475-4886-a08c-9bdf64013476] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [81de3c7c-3475-4886-a08c-9bdf64013476] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.004060419s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-363627 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-363627 /tmp/TestFunctionalparallelMountCmdany-port3664932529/001:/mount-9p --alsologtostderr -v=1] ...
I1016 17:52:16.294146   12375 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.99s)
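
Note: the mount tests share one shape: start a background 9p mount, then poll findmnt inside the guest until it appears, which is why the first findmnt attempt above fails and is retried. A minimal hand-run sketch (/tmp/hostdir is a placeholder):

    out/minikube-linux-amd64 mount -p functional-363627 /tmp/hostdir:/mount-9p &
    out/minikube-linux-amd64 -p functional-363627 ssh "findmnt -T /mount-9p | grep 9p"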

TestFunctional/parallel/MountCmd/specific-port (1.95s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-363627 /tmp/TestFunctionalparallelMountCmdspecific-port3294384786/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-363627 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (284.242568ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1016 17:52:16.607928   12375 retry.go:31] will retry after 665.459516ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-363627 /tmp/TestFunctionalparallelMountCmdspecific-port3294384786/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-363627 ssh "sudo umount -f /mount-9p": exit status 1 (269.061924ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-363627 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-363627 /tmp/TestFunctionalparallelMountCmdspecific-port3294384786/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-363627 /tmp/TestFunctionalparallelMountCmdVerifyCleanup846723355/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-363627 /tmp/TestFunctionalparallelMountCmdVerifyCleanup846723355/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-363627 /tmp/TestFunctionalparallelMountCmdVerifyCleanup846723355/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-363627 ssh "findmnt -T" /mount1: exit status 1 (330.802893ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1016 17:52:18.609397   12375 retry.go:31] will retry after 603.99404ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-363627 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-363627 /tmp/TestFunctionalparallelMountCmdVerifyCleanup846723355/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-363627 /tmp/TestFunctionalparallelMountCmdVerifyCleanup846723355/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-363627 /tmp/TestFunctionalparallelMountCmdVerifyCleanup846723355/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)
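
Note: VerifyCleanup leans on the --kill flag shown above, which is meant to terminate every mount process belonging to the profile in one step:

    out/minikube-linux-amd64 mount -p functional-363627 --kill=true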

TestFunctional/parallel/ServiceCmd/List (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-363627 service list: (1.695092942s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.70s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-363627 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-363627 service list -o json: (1.695042735s)
functional_test.go:1504: Took "1.695143055s" to run "out/minikube-linux-amd64 -p functional-363627 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.70s)
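
Note: a sketch for pulling service names out of the JSON listing (jq and the field names are assumptions about minikube's service list JSON, which appears to be an array of objects with Namespace, Name, and URLs fields):

    out/minikube-linux-amd64 -p functional-363627 service list -o json | jq -r '.[].Name'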

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-363627
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-363627
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-363627
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (149.77s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-782265 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m29.052576362s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (149.77s)
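
Note: the --ha flag provisions multiple control-plane nodes up front. The invocation above, trimmed of test-only flags, is repeatable by hand:

    out/minikube-linux-amd64 -p ha-782265 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p ha-782265 status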

TestMultiControlPlane/serial/DeployApp (5.74s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-782265 kubectl -- rollout status deployment/busybox: (3.915829561s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- exec busybox-7b57f96db7-bsnks -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- exec busybox-7b57f96db7-kcq5p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- exec busybox-7b57f96db7-qmz88 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- exec busybox-7b57f96db7-bsnks -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- exec busybox-7b57f96db7-kcq5p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- exec busybox-7b57f96db7-qmz88 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- exec busybox-7b57f96db7-bsnks -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- exec busybox-7b57f96db7-kcq5p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- exec busybox-7b57f96db7-qmz88 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.74s)
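
Note: DeployApp fans the same three nslookup checks out across every busybox replica. A compact equivalent of that loop (the app=busybox label selector is an assumption about the test deployment):

    for p in $(kubectl --context ha-782265 get pods -l app=busybox -o jsonpath='{.items[*].metadata.name}'); do
      kubectl --context ha-782265 exec "$p" -- nslookup kubernetes.default
    done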

TestMultiControlPlane/serial/PingHostFromPods (0.96s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- exec busybox-7b57f96db7-bsnks -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- exec busybox-7b57f96db7-bsnks -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- exec busybox-7b57f96db7-kcq5p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- exec busybox-7b57f96db7-kcq5p -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- exec busybox-7b57f96db7-qmz88 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 kubectl -- exec busybox-7b57f96db7-qmz88 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.96s)
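
Note: each pod first resolves host.minikube.internal (which maps to the docker bridge gateway, 192.168.49.1 in this run) and then pings it, confirming pod-to-host connectivity. The two commands, with <busybox-pod> as a placeholder for any pod name above:

    kubectl --context ha-782265 exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl --context ha-782265 exec <busybox-pod> -- sh -c "ping -c 1 192.168.49.1"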

TestMultiControlPlane/serial/AddWorkerNode (54.33s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-782265 node add --alsologtostderr -v 5: (53.446912459s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.33s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-782265 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

TestMultiControlPlane/serial/CopyFile (16.88s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp testdata/cp-test.txt ha-782265:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp ha-782265:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1052991701/001/cp-test_ha-782265.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp ha-782265:/home/docker/cp-test.txt ha-782265-m02:/home/docker/cp-test_ha-782265_ha-782265-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m02 "sudo cat /home/docker/cp-test_ha-782265_ha-782265-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp ha-782265:/home/docker/cp-test.txt ha-782265-m03:/home/docker/cp-test_ha-782265_ha-782265-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m03 "sudo cat /home/docker/cp-test_ha-782265_ha-782265-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp ha-782265:/home/docker/cp-test.txt ha-782265-m04:/home/docker/cp-test_ha-782265_ha-782265-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m04 "sudo cat /home/docker/cp-test_ha-782265_ha-782265-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp testdata/cp-test.txt ha-782265-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp ha-782265-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1052991701/001/cp-test_ha-782265-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp ha-782265-m02:/home/docker/cp-test.txt ha-782265:/home/docker/cp-test_ha-782265-m02_ha-782265.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265 "sudo cat /home/docker/cp-test_ha-782265-m02_ha-782265.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp ha-782265-m02:/home/docker/cp-test.txt ha-782265-m03:/home/docker/cp-test_ha-782265-m02_ha-782265-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m03 "sudo cat /home/docker/cp-test_ha-782265-m02_ha-782265-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp ha-782265-m02:/home/docker/cp-test.txt ha-782265-m04:/home/docker/cp-test_ha-782265-m02_ha-782265-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m04 "sudo cat /home/docker/cp-test_ha-782265-m02_ha-782265-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp testdata/cp-test.txt ha-782265-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp ha-782265-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1052991701/001/cp-test_ha-782265-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp ha-782265-m03:/home/docker/cp-test.txt ha-782265:/home/docker/cp-test_ha-782265-m03_ha-782265.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265 "sudo cat /home/docker/cp-test_ha-782265-m03_ha-782265.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp ha-782265-m03:/home/docker/cp-test.txt ha-782265-m02:/home/docker/cp-test_ha-782265-m03_ha-782265-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m02 "sudo cat /home/docker/cp-test_ha-782265-m03_ha-782265-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp ha-782265-m03:/home/docker/cp-test.txt ha-782265-m04:/home/docker/cp-test_ha-782265-m03_ha-782265-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m03 "sudo cat /home/docker/cp-test.txt"
E1016 18:05:53.251632   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m04 "sudo cat /home/docker/cp-test_ha-782265-m03_ha-782265-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp testdata/cp-test.txt ha-782265-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp ha-782265-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1052991701/001/cp-test_ha-782265-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp ha-782265-m04:/home/docker/cp-test.txt ha-782265:/home/docker/cp-test_ha-782265-m04_ha-782265.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265 "sudo cat /home/docker/cp-test_ha-782265-m04_ha-782265.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp ha-782265-m04:/home/docker/cp-test.txt ha-782265-m02:/home/docker/cp-test_ha-782265-m04_ha-782265-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m02 "sudo cat /home/docker/cp-test_ha-782265-m04_ha-782265-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 cp ha-782265-m04:/home/docker/cp-test.txt ha-782265-m03:/home/docker/cp-test_ha-782265-m04_ha-782265-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m03 "sudo cat /home/docker/cp-test_ha-782265-m04_ha-782265-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.88s)
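
Note: CopyFile exercises every direction of minikube cp across the four nodes (host to node, node to host, node to node), verifying each copy with ssh -n against the target node. The two primitives it composes, taken verbatim from the runs above:

    out/minikube-linux-amd64 -p ha-782265 cp testdata/cp-test.txt ha-782265-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-782265 ssh -n ha-782265-m02 "sudo cat /home/docker/cp-test.txt"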

TestMultiControlPlane/serial/StopSecondaryNode (19.77s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-782265 node stop m02 --alsologtostderr -v 5: (19.072983167s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-782265 status --alsologtostderr -v 5: exit status 7 (697.49206ms)
-- stdout --
	ha-782265
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782265-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-782265-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-782265-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1016 18:06:16.854411   76849 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:06:16.854785   76849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:06:16.854799   76849 out.go:374] Setting ErrFile to fd 2...
	I1016 18:06:16.854807   76849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:06:16.855106   76849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:06:16.855435   76849 out.go:368] Setting JSON to false
	I1016 18:06:16.855467   76849 mustload.go:65] Loading cluster: ha-782265
	I1016 18:06:16.855588   76849 notify.go:220] Checking for updates...
	I1016 18:06:16.856086   76849 config.go:182] Loaded profile config "ha-782265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:06:16.856110   76849 status.go:174] checking status of ha-782265 ...
	I1016 18:06:16.856615   76849 cli_runner.go:164] Run: docker container inspect ha-782265 --format={{.State.Status}}
	I1016 18:06:16.876899   76849 status.go:371] ha-782265 host status = "Running" (err=<nil>)
	I1016 18:06:16.876923   76849 host.go:66] Checking if "ha-782265" exists ...
	I1016 18:06:16.877172   76849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-782265
	I1016 18:06:16.896339   76849 host.go:66] Checking if "ha-782265" exists ...
	I1016 18:06:16.896597   76849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:06:16.896667   76849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-782265
	I1016 18:06:16.915451   76849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/ha-782265/id_rsa Username:docker}
	I1016 18:06:17.011456   76849 ssh_runner.go:195] Run: systemctl --version
	I1016 18:06:17.017940   76849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:06:17.030434   76849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:06:17.090866   76849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-16 18:06:17.079827977 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:06:17.091380   76849 kubeconfig.go:125] found "ha-782265" server: "https://192.168.49.254:8443"
	I1016 18:06:17.091409   76849 api_server.go:166] Checking apiserver status ...
	I1016 18:06:17.091440   76849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:06:17.103582   76849 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1240/cgroup
	W1016 18:06:17.112466   76849 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1240/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:06:17.112545   76849 ssh_runner.go:195] Run: ls
	I1016 18:06:17.116587   76849 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1016 18:06:17.120786   76849 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1016 18:06:17.120812   76849 status.go:463] ha-782265 apiserver status = Running (err=<nil>)
	I1016 18:06:17.120823   76849 status.go:176] ha-782265 status: &{Name:ha-782265 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 18:06:17.120844   76849 status.go:174] checking status of ha-782265-m02 ...
	I1016 18:06:17.121126   76849 cli_runner.go:164] Run: docker container inspect ha-782265-m02 --format={{.State.Status}}
	I1016 18:06:17.140809   76849 status.go:371] ha-782265-m02 host status = "Stopped" (err=<nil>)
	I1016 18:06:17.140832   76849 status.go:384] host is not running, skipping remaining checks
	I1016 18:06:17.140838   76849 status.go:176] ha-782265-m02 status: &{Name:ha-782265-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 18:06:17.140857   76849 status.go:174] checking status of ha-782265-m03 ...
	I1016 18:06:17.141120   76849 cli_runner.go:164] Run: docker container inspect ha-782265-m03 --format={{.State.Status}}
	I1016 18:06:17.161650   76849 status.go:371] ha-782265-m03 host status = "Running" (err=<nil>)
	I1016 18:06:17.161674   76849 host.go:66] Checking if "ha-782265-m03" exists ...
	I1016 18:06:17.161934   76849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-782265-m03
	I1016 18:06:17.181415   76849 host.go:66] Checking if "ha-782265-m03" exists ...
	I1016 18:06:17.181676   76849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:06:17.181727   76849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-782265-m03
	I1016 18:06:17.200352   76849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/ha-782265-m03/id_rsa Username:docker}
	I1016 18:06:17.296401   76849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:06:17.309331   76849 kubeconfig.go:125] found "ha-782265" server: "https://192.168.49.254:8443"
	I1016 18:06:17.309357   76849 api_server.go:166] Checking apiserver status ...
	I1016 18:06:17.309388   76849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:06:17.321071   76849 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1173/cgroup
	W1016 18:06:17.329854   76849 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1173/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:06:17.329915   76849 ssh_runner.go:195] Run: ls
	I1016 18:06:17.334012   76849 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1016 18:06:17.338239   76849 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1016 18:06:17.338261   76849 status.go:463] ha-782265-m03 apiserver status = Running (err=<nil>)
	I1016 18:06:17.338269   76849 status.go:176] ha-782265-m03 status: &{Name:ha-782265-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 18:06:17.338294   76849 status.go:174] checking status of ha-782265-m04 ...
	I1016 18:06:17.338549   76849 cli_runner.go:164] Run: docker container inspect ha-782265-m04 --format={{.State.Status}}
	I1016 18:06:17.357622   76849 status.go:371] ha-782265-m04 host status = "Running" (err=<nil>)
	I1016 18:06:17.357643   76849 host.go:66] Checking if "ha-782265-m04" exists ...
	I1016 18:06:17.357890   76849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-782265-m04
	I1016 18:06:17.376122   76849 host.go:66] Checking if "ha-782265-m04" exists ...
	I1016 18:06:17.376352   76849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:06:17.376383   76849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-782265-m04
	I1016 18:06:17.394971   76849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/ha-782265-m04/id_rsa Username:docker}
	I1016 18:06:17.490980   76849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:06:17.503360   76849 status.go:176] ha-782265-m04 status: &{Name:ha-782265-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.77s)
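
A minimal sketch of reproducing this degraded-status check by hand, reusing the ha-782265 profile from this run and writing minikube for the out/minikube-linux-amd64 binary under test:

  # stop the m02 secondary control-plane node
  minikube -p ha-782265 node stop m02 --alsologtostderr -v 5
  # with one node down, status reports it Stopped and exits non-zero (7 above)
  minikube -p ha-782265 status --alsologtostderr -v 5
  echo "status exit code: $?"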

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.97s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-782265 node start m02 --alsologtostderr -v 5: (8.029474023s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.97s)
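
A hand-run equivalent of the restart check, a sketch under the same assumptions as the previous one:

  minikube -p ha-782265 node start m02 --alsologtostderr -v 5
  minikube -p ha-782265 status --alsologtostderr -v 5   # exits 0 once every node is back to Running
  kubectl get nodes                                     # all nodes should report Ready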

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (104.78s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 stop --alsologtostderr -v 5
E1016 18:06:57.922354   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:06:57.928774   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:06:57.940195   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:06:57.961593   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:06:58.003113   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:06:58.086856   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:06:58.248410   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:06:58.570301   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:06:59.211940   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:07:00.494060   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:07:03.055954   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:07:08.178081   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:07:16.319127   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:07:18.419355   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-782265 stop --alsologtostderr -v 5: (50.564077261s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 start --wait true --alsologtostderr -v 5
E1016 18:07:38.901161   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-782265 start --wait true --alsologtostderr -v 5: (54.109077047s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (104.78s)
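
The property under test is that a full stop/start cycle preserves the node list. A minimal sketch, with illustrative temp-file paths:

  minikube -p ha-782265 node list --alsologtostderr -v 5 > /tmp/nodes.before
  minikube -p ha-782265 stop --alsologtostderr -v 5
  minikube -p ha-782265 start --wait true --alsologtostderr -v 5
  minikube -p ha-782265 node list --alsologtostderr -v 5 > /tmp/nodes.after
  diff /tmp/nodes.before /tmp/nodes.after   # no output: the restart kept all nodes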

TestMultiControlPlane/serial/DeleteSecondaryNode (10.58s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 node delete m03 --alsologtostderr -v 5
E1016 18:08:19.862884   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-782265 node delete m03 --alsologtostderr -v 5: (9.777961811s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.58s)
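
The post-delete readiness check is a kubectl go-template; a sketch of the same check, with the template taken from the run above:

  minikube -p ha-782265 node delete m03 --alsologtostderr -v 5
  # prints one True per remaining Ready node
  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'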

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

TestMultiControlPlane/serial/StopCluster (42.18s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-782265 stop --alsologtostderr -v 5: (42.072301805s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-782265 status --alsologtostderr -v 5: exit status 7 (109.281588ms)
-- stdout --
	ha-782265
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-782265-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-782265-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1016 18:09:06.265500   90798 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:09:06.265753   90798 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:09:06.265762   90798 out.go:374] Setting ErrFile to fd 2...
	I1016 18:09:06.265767   90798 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:09:06.265963   90798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:09:06.266134   90798 out.go:368] Setting JSON to false
	I1016 18:09:06.266170   90798 mustload.go:65] Loading cluster: ha-782265
	I1016 18:09:06.266226   90798 notify.go:220] Checking for updates...
	I1016 18:09:06.266567   90798 config.go:182] Loaded profile config "ha-782265": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:09:06.266583   90798 status.go:174] checking status of ha-782265 ...
	I1016 18:09:06.266994   90798 cli_runner.go:164] Run: docker container inspect ha-782265 --format={{.State.Status}}
	I1016 18:09:06.288590   90798 status.go:371] ha-782265 host status = "Stopped" (err=<nil>)
	I1016 18:09:06.288611   90798 status.go:384] host is not running, skipping remaining checks
	I1016 18:09:06.288617   90798 status.go:176] ha-782265 status: &{Name:ha-782265 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 18:09:06.288652   90798 status.go:174] checking status of ha-782265-m02 ...
	I1016 18:09:06.288951   90798 cli_runner.go:164] Run: docker container inspect ha-782265-m02 --format={{.State.Status}}
	I1016 18:09:06.307767   90798 status.go:371] ha-782265-m02 host status = "Stopped" (err=<nil>)
	I1016 18:09:06.307813   90798 status.go:384] host is not running, skipping remaining checks
	I1016 18:09:06.307824   90798 status.go:176] ha-782265-m02 status: &{Name:ha-782265-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 18:09:06.307849   90798 status.go:174] checking status of ha-782265-m04 ...
	I1016 18:09:06.308121   90798 cli_runner.go:164] Run: docker container inspect ha-782265-m04 --format={{.State.Status}}
	I1016 18:09:06.326464   90798 status.go:371] ha-782265-m04 host status = "Stopped" (err=<nil>)
	I1016 18:09:06.326498   90798 status.go:384] host is not running, skipping remaining checks
	I1016 18:09:06.326510   90798 status.go:176] ha-782265-m04 status: &{Name:ha-782265-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (42.18s)

TestMultiControlPlane/serial/RestartCluster (52.77s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1016 18:09:41.785012   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-782265 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (51.928294965s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (52.77s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

TestMultiControlPlane/serial/AddSecondaryNode (76.98s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 node add --control-plane --alsologtostderr -v 5
E1016 18:10:53.250969   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-782265 node add --control-plane --alsologtostderr -v 5: (1m16.10058167s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-782265 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.98s)
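
Re-adding a control-plane node is a single command; a sketch:

  minikube -p ha-782265 node add --control-plane --alsologtostderr -v 5
  minikube -p ha-782265 status --alsologtostderr -v 5   # the new node appears as another Control Plane entry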

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

TestJSONOutput/start/Command (38.86s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-088519 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1016 18:11:57.921671   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-088519 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (38.859702922s)
--- PASS: TestJSONOutput/start/Command (38.86s)
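
With --output=json, minikube emits one CloudEvent-style JSON object per line (the event shape is visible in the TestErrorJSONOutput stdout further down). A sketch of filtering the step events; the jq pipeline is illustrative tooling, not part of the test:

  minikube start -p json-output-088519 --output=json --user=testUser --memory=3072 --wait=true \
      --driver=docker --container-runtime=crio \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | "\(.data.currentstep)/\(.data.totalsteps) \(.data.name)"'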

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.01s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-088519 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-088519 --output=json --user=testUser: (6.006475817s)
--- PASS: TestJSONOutput/stop/Command (6.01s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-733503 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-733503 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (71.273566ms)
-- stdout --
	{"specversion":"1.0","id":"40b0712c-10ab-46b8-ba01-d799fb975fa1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-733503] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8264964f-e1a8-43e8-b4e2-d7a7b2b6f0ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21738"}}
	{"specversion":"1.0","id":"76fcf94d-ad26-4fd2-895d-bd0856a7ebdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0d825d0a-1433-4986-8e06-3e0fa1e608f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig"}}
	{"specversion":"1.0","id":"382e35d5-6eef-4bb0-8486-02cff407d961","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube"}}
	{"specversion":"1.0","id":"3ac4207f-a5a2-46d3-b86d-9933495227dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e926b096-942b-4cdb-b8a9-dfdd9746dfd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"faf21d5b-6d25-4a1f-b392-f47a139a24f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-733503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-733503
--- PASS: TestErrorJSONOutput (0.22s)
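
The last event above is the machine-readable error record. A sketch of extracting it; jq here is illustrative tooling, not part of the test:

  minikube start -p json-output-error-733503 --memory=3072 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name) (exit \(.data.exitcode)): \(.data.message)"'
  # -> DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on linux/amd64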

TestKicCustomNetwork/create_custom_network (30.07s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-177082 --network=
E1016 18:12:25.628671   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-177082 --network=: (27.859783271s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-177082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-177082
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-177082: (2.191860326s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.07s)
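
The --network flag controls which Docker network the kic container joins, and minikube creates the network when it does not exist yet. A sketch using a hypothetical profile name net-demo and network name my-net:

  minikube start -p net-demo --network=my-net
  docker network ls --format '{{.Name}}' | grep -x my-net   # the network now exists
  minikube delete -p net-demo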

TestKicCustomNetwork/use_default_bridge_network (25.16s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-084036 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-084036 --network=bridge: (23.095184471s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-084036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-084036
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-084036: (2.047848003s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.16s)

TestKicExistingNetwork (25.86s)

=== RUN   TestKicExistingNetwork
I1016 18:13:14.054880   12375 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1016 18:13:14.073096   12375 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1016 18:13:14.073176   12375 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1016 18:13:14.073195   12375 cli_runner.go:164] Run: docker network inspect existing-network
W1016 18:13:14.090936   12375 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1016 18:13:14.090976   12375 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1016 18:13:14.090993   12375 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1016 18:13:14.091125   12375 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1016 18:13:14.111273   12375 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e6b487beca69 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:46:43:25:0f:93} reservation:<nil>}
I1016 18:13:14.111661   12375 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00034b5f0}
I1016 18:13:14.111688   12375 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1016 18:13:14.111759   12375 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1016 18:13:14.173580   12375 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-603479 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-603479 --network=existing-network: (23.681773938s)
helpers_test.go:175: Cleaning up "existing-network-603479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-603479
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-603479: (2.018888048s)
I1016 18:13:39.892316   12375 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.86s)
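
Here the network is created up front and minikube attaches to it instead of allocating a new subnet. A trimmed sketch of the same sequence (the full docker command used by the test, with its extra -o options, appears in the log above):

  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
  minikube start -p existing-network-603479 --network=existing-network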

TestKicCustomSubnet (24.91s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-028531 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-028531 --subnet=192.168.60.0/24: (22.722599803s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-028531 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-028531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-028531
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-028531: (2.165045402s)
--- PASS: TestKicCustomSubnet (24.91s)
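
A sketch of the subnet round-trip check, using the inspect format string from the run above:

  minikube start -p custom-subnet-028531 --subnet=192.168.60.0/24
  docker network inspect custom-subnet-028531 --format '{{(index .IPAM.Config 0).Subnet}}'
  # expected output: 192.168.60.0/24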

TestKicStaticIP (24.11s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-810839 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-810839 --static-ip=192.168.200.200: (21.786743068s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-810839 ip
helpers_test.go:175: Cleaning up "static-ip-810839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-810839
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-810839: (2.190269525s)
--- PASS: TestKicStaticIP (24.11s)
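
A sketch of asserting that the requested static IP took effect:

  minikube start -p static-ip-810839 --static-ip=192.168.200.200
  test "$(minikube -p static-ip-810839 ip)" = 192.168.200.200 && echo "static IP honored"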

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (49.3s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-677740 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-677740 --driver=docker  --container-runtime=crio: (21.162238875s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-680416 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-680416 --driver=docker  --container-runtime=crio: (22.173356882s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-677740
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-680416
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-680416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-680416
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-680416: (2.395408126s)
helpers_test.go:175: Cleaning up "first-677740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-677740
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-677740: (2.372253364s)
--- PASS: TestMinikubeProfile (49.30s)

TestMountStart/serial/StartWithMountFirst (5.23s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-999773 --memory=3072 --mount-string /tmp/TestMountStartserial4092056101/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-999773 --memory=3072 --mount-string /tmp/TestMountStartserial4092056101/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.22978702s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.23s)
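
The mount flags bind a host directory into the guest at start time; a sketch reusing the flag set from the run above with a hypothetical host path /srv/data:

  minikube start -p mount-start-1-999773 --memory=3072 \
    --mount-string /srv/data:/minikube-host --mount-gid 0 --mount-msize 6543 \
    --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=crio
  minikube -p mount-start-1-999773 ssh -- ls /minikube-host   # lists the contents of /srv/data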

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-999773 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (5.55s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-015883 --memory=3072 --mount-string /tmp/TestMountStartserial4092056101/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-015883 --memory=3072 --mount-string /tmp/TestMountStartserial4092056101/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.548857382s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.55s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-015883 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-999773 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-999773 --alsologtostderr -v=5: (1.69536382s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-015883 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-015883
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-015883: (1.237996462s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.27s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-015883
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-015883: (6.273112017s)
--- PASS: TestMountStart/serial/RestartStopped (7.27s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-015883 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (63.25s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-028404 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1016 18:15:53.251643   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-028404 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m2.779384519s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (63.25s)

TestMultiNode/serial/DeployApp2Nodes (3.61s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-028404 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-028404 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-028404 -- rollout status deployment/busybox: (2.312147645s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-028404 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-028404 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-028404 -- exec busybox-7b57f96db7-q67bp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-028404 -- exec busybox-7b57f96db7-qng9b -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-028404 -- exec busybox-7b57f96db7-q67bp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-028404 -- exec busybox-7b57f96db7-qng9b -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-028404 -- exec busybox-7b57f96db7-q67bp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-028404 -- exec busybox-7b57f96db7-qng9b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.61s)
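Note: the checks above deploy the busybox test manifest and resolve an external name, the cluster service name, and its FQDN from every replica (a sketch; <pod> stands for each name printed by the get pods step, and the manifest path is relative to the integration test directory):

    kubectl apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    kubectl rollout status deployment/busybox
    kubectl get pods -o jsonpath='{.items[*].metadata.name}'
    # repeat for each <pod> listed above
    kubectl exec <pod> -- nslookup kubernetes.io
    kubectl exec <pod> -- nslookup kubernetes.default.svc.cluster.local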

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.65s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-028404 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-028404 -- exec busybox-7b57f96db7-q67bp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-028404 -- exec busybox-7b57f96db7-q67bp -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-028404 -- exec busybox-7b57f96db7-qng9b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-028404 -- exec busybox-7b57f96db7-qng9b -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.65s)
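Note: host reachability is checked by resolving host.minikube.internal inside each pod and pinging the returned address (a sketch; <pod> is a placeholder, and 192.168.67.1 is the host address observed in this run):

    # awk 'NR==5' picks the line of nslookup output that carries the resolved address
    kubectl exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl exec <pod> -- sh -c "ping -c 1 192.168.67.1"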

                                                
                                    
TestMultiNode/serial/AddNode (24.79s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-028404 -v=5 --alsologtostderr
E1016 18:16:57.917482   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-028404 -v=5 --alsologtostderr: (24.140767353s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.79s)
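Note: adding a worker is a single command, after which status should list the new node (a sketch; profile name illustrative):

    minikube node add -p multinode-demo -v=5 --alsologtostderr
    minikube -p multinode-demo status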

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-028404 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.65s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.62s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 cp testdata/cp-test.txt multinode-028404:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 ssh -n multinode-028404 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 cp multinode-028404:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2570967531/001/cp-test_multinode-028404.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 ssh -n multinode-028404 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 cp multinode-028404:/home/docker/cp-test.txt multinode-028404-m02:/home/docker/cp-test_multinode-028404_multinode-028404-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 ssh -n multinode-028404 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 ssh -n multinode-028404-m02 "sudo cat /home/docker/cp-test_multinode-028404_multinode-028404-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 cp multinode-028404:/home/docker/cp-test.txt multinode-028404-m03:/home/docker/cp-test_multinode-028404_multinode-028404-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 ssh -n multinode-028404 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 ssh -n multinode-028404-m03 "sudo cat /home/docker/cp-test_multinode-028404_multinode-028404-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 cp testdata/cp-test.txt multinode-028404-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 ssh -n multinode-028404-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 cp multinode-028404-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2570967531/001/cp-test_multinode-028404-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 ssh -n multinode-028404-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 cp multinode-028404-m02:/home/docker/cp-test.txt multinode-028404:/home/docker/cp-test_multinode-028404-m02_multinode-028404.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 ssh -n multinode-028404-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 ssh -n multinode-028404 "sudo cat /home/docker/cp-test_multinode-028404-m02_multinode-028404.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 cp multinode-028404-m02:/home/docker/cp-test.txt multinode-028404-m03:/home/docker/cp-test_multinode-028404-m02_multinode-028404-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 ssh -n multinode-028404-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 ssh -n multinode-028404-m03 "sudo cat /home/docker/cp-test_multinode-028404-m02_multinode-028404-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 cp testdata/cp-test.txt multinode-028404-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 ssh -n multinode-028404-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 cp multinode-028404-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2570967531/001/cp-test_multinode-028404-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 ssh -n multinode-028404-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 cp multinode-028404-m03:/home/docker/cp-test.txt multinode-028404:/home/docker/cp-test_multinode-028404-m03_multinode-028404.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 ssh -n multinode-028404-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 ssh -n multinode-028404 "sudo cat /home/docker/cp-test_multinode-028404-m03_multinode-028404.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 cp multinode-028404-m03:/home/docker/cp-test.txt multinode-028404-m02:/home/docker/cp-test_multinode-028404-m03_multinode-028404-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 ssh -n multinode-028404-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 ssh -n multinode-028404-m02 "sudo cat /home/docker/cp-test_multinode-028404-m03_multinode-028404-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.62s)
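Note: the matrix above covers all three directions of minikube cp, each verified with an ssh cat on the target node (a sketch; node names follow the <profile>, <profile>-m02, ... convention, and multinode-demo is illustrative):

    # local -> node
    minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
    # node -> local
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
    # verify on the receiving node
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"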

                                                
                                    
TestMultiNode/serial/StopNode (2.27s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-028404 node stop m03: (1.263694702s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-028404 status: exit status 7 (515.00862ms)

                                                
                                                
-- stdout --
	multinode-028404
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-028404-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-028404-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-028404 status --alsologtostderr: exit status 7 (491.975725ms)

                                                
                                                
-- stdout --
	multinode-028404
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-028404-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-028404-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:17:26.642688  150336 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:17:26.643147  150336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:17:26.643156  150336 out.go:374] Setting ErrFile to fd 2...
	I1016 18:17:26.643160  150336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:17:26.643372  150336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:17:26.643536  150336 out.go:368] Setting JSON to false
	I1016 18:17:26.643562  150336 mustload.go:65] Loading cluster: multinode-028404
	I1016 18:17:26.643753  150336 notify.go:220] Checking for updates...
	I1016 18:17:26.644002  150336 config.go:182] Loaded profile config "multinode-028404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:17:26.644018  150336 status.go:174] checking status of multinode-028404 ...
	I1016 18:17:26.644576  150336 cli_runner.go:164] Run: docker container inspect multinode-028404 --format={{.State.Status}}
	I1016 18:17:26.663252  150336 status.go:371] multinode-028404 host status = "Running" (err=<nil>)
	I1016 18:17:26.663274  150336 host.go:66] Checking if "multinode-028404" exists ...
	I1016 18:17:26.663521  150336 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-028404
	I1016 18:17:26.681988  150336 host.go:66] Checking if "multinode-028404" exists ...
	I1016 18:17:26.682325  150336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:17:26.682390  150336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-028404
	I1016 18:17:26.700318  150336 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/multinode-028404/id_rsa Username:docker}
	I1016 18:17:26.796257  150336 ssh_runner.go:195] Run: systemctl --version
	I1016 18:17:26.802688  150336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:17:26.816052  150336 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:17:26.874016  150336 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-16 18:17:26.863357908 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:17:26.874602  150336 kubeconfig.go:125] found "multinode-028404" server: "https://192.168.67.2:8443"
	I1016 18:17:26.874637  150336 api_server.go:166] Checking apiserver status ...
	I1016 18:17:26.874682  150336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1016 18:17:26.886702  150336 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1267/cgroup
	W1016 18:17:26.895366  150336 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1267/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1016 18:17:26.895416  150336 ssh_runner.go:195] Run: ls
	I1016 18:17:26.899224  150336 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1016 18:17:26.904483  150336 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1016 18:17:26.904509  150336 status.go:463] multinode-028404 apiserver status = Running (err=<nil>)
	I1016 18:17:26.904518  150336 status.go:176] multinode-028404 status: &{Name:multinode-028404 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 18:17:26.904534  150336 status.go:174] checking status of multinode-028404-m02 ...
	I1016 18:17:26.904823  150336 cli_runner.go:164] Run: docker container inspect multinode-028404-m02 --format={{.State.Status}}
	I1016 18:17:26.923121  150336 status.go:371] multinode-028404-m02 host status = "Running" (err=<nil>)
	I1016 18:17:26.923147  150336 host.go:66] Checking if "multinode-028404-m02" exists ...
	I1016 18:17:26.923459  150336 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-028404-m02
	I1016 18:17:26.941820  150336 host.go:66] Checking if "multinode-028404-m02" exists ...
	I1016 18:17:26.942077  150336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1016 18:17:26.942111  150336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-028404-m02
	I1016 18:17:26.960545  150336 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21738-8849/.minikube/machines/multinode-028404-m02/id_rsa Username:docker}
	I1016 18:17:27.056103  150336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1016 18:17:27.068960  150336 status.go:176] multinode-028404-m02 status: &{Name:multinode-028404-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1016 18:17:27.069003  150336 status.go:174] checking status of multinode-028404-m03 ...
	I1016 18:17:27.069268  150336 cli_runner.go:164] Run: docker container inspect multinode-028404-m03 --format={{.State.Status}}
	I1016 18:17:27.088153  150336 status.go:371] multinode-028404-m03 host status = "Stopped" (err=<nil>)
	I1016 18:17:27.088173  150336 status.go:384] host is not running, skipping remaining checks
	I1016 18:17:27.088179  150336 status.go:176] multinode-028404-m03 status: &{Name:multinode-028404-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
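Note: stopping a single worker leaves the control plane serving, and status deliberately exits 7 once any node is down (a sketch; profile name illustrative):

    minikube -p multinode-demo node stop m03
    # exit status 7: m03 reports host/kubelet Stopped
    minikube -p multinode-demo status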

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.5s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-028404 node start m03 -v=5 --alsologtostderr: (6.803423079s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.50s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (57.88s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-028404
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-028404
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-028404: (29.952476914s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-028404 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-028404 --wait=true -v=5 --alsologtostderr: (27.826393875s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-028404
--- PASS: TestMultiNode/serial/RestartKeepsNodes (57.88s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-028404 node delete m03: (4.401453874s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.00s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.52s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-028404 stop: (28.348494916s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-028404 status: exit status 7 (87.996678ms)

                                                
                                                
-- stdout --
	multinode-028404
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-028404-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-028404 status --alsologtostderr: exit status 7 (83.238576ms)

                                                
                                                
-- stdout --
	multinode-028404
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-028404-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1016 18:19:05.945093  160077 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:19:05.945183  160077 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:19:05.945193  160077 out.go:374] Setting ErrFile to fd 2...
	I1016 18:19:05.945200  160077 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:19:05.945440  160077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:19:05.945649  160077 out.go:368] Setting JSON to false
	I1016 18:19:05.945680  160077 mustload.go:65] Loading cluster: multinode-028404
	I1016 18:19:05.945740  160077 notify.go:220] Checking for updates...
	I1016 18:19:05.946110  160077 config.go:182] Loaded profile config "multinode-028404": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:19:05.946125  160077 status.go:174] checking status of multinode-028404 ...
	I1016 18:19:05.946570  160077 cli_runner.go:164] Run: docker container inspect multinode-028404 --format={{.State.Status}}
	I1016 18:19:05.964661  160077 status.go:371] multinode-028404 host status = "Stopped" (err=<nil>)
	I1016 18:19:05.964688  160077 status.go:384] host is not running, skipping remaining checks
	I1016 18:19:05.964697  160077 status.go:176] multinode-028404 status: &{Name:multinode-028404 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1016 18:19:05.964742  160077 status.go:174] checking status of multinode-028404-m02 ...
	I1016 18:19:05.964989  160077 cli_runner.go:164] Run: docker container inspect multinode-028404-m02 --format={{.State.Status}}
	I1016 18:19:05.982849  160077 status.go:371] multinode-028404-m02 host status = "Stopped" (err=<nil>)
	I1016 18:19:05.982891  160077 status.go:384] host is not running, skipping remaining checks
	I1016 18:19:05.982905  160077 status.go:176] multinode-028404-m02 status: &{Name:multinode-028404-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.52s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (45.62s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-028404 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-028404 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (45.022158316s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-028404 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (45.62s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (23.97s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-028404
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-028404-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-028404-m02 --driver=docker  --container-runtime=crio: exit status 14 (72.472045ms)

                                                
                                                
-- stdout --
	* [multinode-028404-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-028404-m02' is duplicated with machine name 'multinode-028404-m02' in profile 'multinode-028404'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-028404-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-028404-m03 --driver=docker  --container-runtime=crio: (21.174838628s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-028404
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-028404: exit status 80 (284.263072ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-028404 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-028404-m03 already exists in multinode-028404-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-028404-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-028404-m03: (2.387579319s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.97s)

                                                
                                    
TestPreload (116.05s)
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-298333 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1016 18:20:53.250924   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-298333 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (47.40127803s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-298333 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-298333 image pull gcr.io/k8s-minikube/busybox: (1.497275254s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-298333
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-298333: (5.860771765s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-298333 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1016 18:21:57.917489   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-298333 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (58.660942873s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-298333 image list
helpers_test.go:175: Cleaning up "test-preload-298333" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-298333
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-298333: (2.411584632s)
--- PASS: TestPreload (116.05s)
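Note: the flow above verifies that an image pulled into a non-preloaded cluster is still present after a stop and a preload-enabled restart (a sketch; profile name illustrative, versions as in this run):

    minikube start -p preload-demo --memory=3072 --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --memory=3072 --wait=true --driver=docker --container-runtime=crio
    # busybox should still appear in the list
    minikube -p preload-demo image list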

                                                
                                    
TestScheduledStopUnix (96.11s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-946894 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-946894 --memory=3072 --driver=docker  --container-runtime=crio: (20.936781345s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-946894 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-946894 -n scheduled-stop-946894
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-946894 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1016 18:22:37.190308   12375 retry.go:31] will retry after 73.702µs: open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/scheduled-stop-946894/pid: no such file or directory
I1016 18:22:37.191500   12375 retry.go:31] will retry after 197.511µs: open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/scheduled-stop-946894/pid: no such file or directory
I1016 18:22:37.192665   12375 retry.go:31] will retry after 199.516µs: open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/scheduled-stop-946894/pid: no such file or directory
I1016 18:22:37.193838   12375 retry.go:31] will retry after 333.27µs: open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/scheduled-stop-946894/pid: no such file or directory
I1016 18:22:37.195035   12375 retry.go:31] will retry after 513.542µs: open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/scheduled-stop-946894/pid: no such file or directory
I1016 18:22:37.196216   12375 retry.go:31] will retry after 787.372µs: open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/scheduled-stop-946894/pid: no such file or directory
I1016 18:22:37.197418   12375 retry.go:31] will retry after 688.67µs: open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/scheduled-stop-946894/pid: no such file or directory
I1016 18:22:37.198568   12375 retry.go:31] will retry after 1.903484ms: open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/scheduled-stop-946894/pid: no such file or directory
I1016 18:22:37.200799   12375 retry.go:31] will retry after 2.327555ms: open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/scheduled-stop-946894/pid: no such file or directory
I1016 18:22:37.204032   12375 retry.go:31] will retry after 1.954287ms: open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/scheduled-stop-946894/pid: no such file or directory
I1016 18:22:37.206383   12375 retry.go:31] will retry after 7.237628ms: open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/scheduled-stop-946894/pid: no such file or directory
I1016 18:22:37.214640   12375 retry.go:31] will retry after 10.811529ms: open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/scheduled-stop-946894/pid: no such file or directory
I1016 18:22:37.225901   12375 retry.go:31] will retry after 11.877212ms: open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/scheduled-stop-946894/pid: no such file or directory
I1016 18:22:37.238231   12375 retry.go:31] will retry after 25.457769ms: open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/scheduled-stop-946894/pid: no such file or directory
I1016 18:22:37.264512   12375 retry.go:31] will retry after 30.464502ms: open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/scheduled-stop-946894/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-946894 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-946894 -n scheduled-stop-946894
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-946894
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-946894 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1016 18:23:20.992315   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-946894
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-946894: exit status 7 (66.254185ms)

                                                
                                                
-- stdout --
	scheduled-stop-946894
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-946894 -n scheduled-stop-946894
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-946894 -n scheduled-stop-946894: exit status 7 (68.242784ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-946894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-946894
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-946894: (3.790562704s)
--- PASS: TestScheduledStopUnix (96.11s)
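Note: scheduled stop is driven entirely by flags on minikube stop; a pending schedule can be cancelled or replaced, and the countdown is visible through status (a sketch; the profile name demo is illustrative):

    minikube stop -p demo --schedule 5m
    minikube status -p demo --format={{.TimeToStop}}
    minikube stop -p demo --cancel-scheduled
    # reschedule; once it fires, status exits 7 with host Stopped
    minikube stop -p demo --schedule 15s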

                                                
                                    
TestInsufficientStorage (9.83s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-114513 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E1016 18:23:56.322876   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-114513 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.348936852s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8d0e9d9d-abd1-47ae-b87a-b5c77fbc6f7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-114513] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c687b152-8d99-4d99-8ece-f5bfccf04904","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21738"}}
	{"specversion":"1.0","id":"f1b98dc6-f1bf-4487-a314-da7e619ef9d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"707a1898-e264-42d0-bb23-4ebd0e0d5095","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig"}}
	{"specversion":"1.0","id":"0f07cb2c-001a-4b89-9d0a-8e8dfd0bda14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube"}}
	{"specversion":"1.0","id":"2f4dc115-404f-41d8-b20a-7167c751750c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"80354756-0eee-4482-b0d9-238eaffd8ee3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ca01466a-0137-4664-b7f0-139733433e07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"207da4d0-2d86-4972-98fb-f5f1d6ad4bfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8590d4f0-e14a-46f3-8cf4-c5cd5368dafc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cf26b5f5-46f8-4718-8d4b-a04f84cd2f9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"beba5d19-4dd7-437a-a914-476cfb688e34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-114513\" primary control-plane node in \"insufficient-storage-114513\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f0854627-1c3a-44cc-aadf-68cd40d48b86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760363564-21724 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f1f87ede-e0c2-43cb-aa18-4bbc279f87b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f09f2468-b7ee-4a6d-b91b-a8588eb5625f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-114513 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-114513 --output=json --layout=cluster: exit status 7 (275.536639ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-114513","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-114513","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1016 18:23:59.543248  180356 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-114513" does not appear in /home/jenkins/minikube-integration/21738-8849/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-114513 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-114513 --output=json --layout=cluster: exit status 7 (281.801975ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-114513","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-114513","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1016 18:23:59.825847  180484 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-114513" does not appear in /home/jenkins/minikube-integration/21738-8849/kubeconfig
	E1016 18:23:59.836307  180484 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/insufficient-storage-114513/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-114513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-114513
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-114513: (1.92021712s)
--- PASS: TestInsufficientStorage (9.83s)
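Note: the shortage is simulated with the test-only environment variables shown in the JSON stream above; start then aborts with exit code 26 (RSRC_DOCKER_STORAGE) and status reports code 507 (a sketch; profile name illustrative):

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p storage-demo --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio
    # each node reports StatusCode 507 / InsufficientStorage
    minikube status -p storage-demo --output=json --layout=cluster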

                                                
                                    
TestRunningBinaryUpgrade (70.37s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1812998474 start -p running-upgrade-931818 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1812998474 start -p running-upgrade-931818 --memory=3072 --vm-driver=docker  --container-runtime=crio: (45.497454874s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-931818 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-931818 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.734495658s)
helpers_test.go:175: Cleaning up "running-upgrade-931818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-931818
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-931818: (2.537587711s)
--- PASS: TestRunningBinaryUpgrade (70.37s)

                                                
                                    
TestKubernetesUpgrade (299.53s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.601791894s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-750025
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-750025: (1.319898675s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-750025 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-750025 status --format={{.Host}}: exit status 7 (86.89938ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1016 18:26:57.917378   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/functional-363627/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.645003378s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-750025 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (95.755904ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-750025] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-750025
	    minikube start -p kubernetes-upgrade-750025 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7500252 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-750025 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-750025 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.799890869s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-750025" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-750025
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-750025: (4.894788762s)
--- PASS: TestKubernetesUpgrade (299.53s)
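Note: the upgrade path above is stop-then-start with a newer --kubernetes-version; re-running start with an older version on the same profile is refused with exit status 106 (a sketch; profile name illustrative, versions as in this run):

    minikube start -p upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    minikube stop -p upgrade-demo
    minikube start -p upgrade-demo --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio
    kubectl --context upgrade-demo version --output=json
    # refused: K8S_DOWNGRADE_UNSUPPORTED
    minikube start -p upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio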

                                                
                                    
TestMissingContainerUpgrade (77.9s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3385299078 start -p missing-upgrade-294813 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3385299078 start -p missing-upgrade-294813 --memory=3072 --driver=docker  --container-runtime=crio: (24.129648232s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-294813
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-294813: (10.45334027s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-294813
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-294813 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-294813 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.316924909s)
helpers_test.go:175: Cleaning up "missing-upgrade-294813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-294813
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-294813: (2.469861642s)
--- PASS: TestMissingContainerUpgrade (77.90s)
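Note: this test simulates a cluster whose backing container was removed out-of-band. Reduced to a sketch (profile and commands taken from the log above; the profile metadata survives, so a plain start recreates the missing container):

    docker stop missing-upgrade-294813
    docker rm missing-upgrade-294813
    minikube start -p missing-upgrade-294813 --driver=docker --container-runtime=crio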

TestStoppedBinaryUpgrade/Setup (0.55s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

TestPause/serial/Start (61.18s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-388667 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-388667 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m1.1817083s)
--- PASS: TestPause/serial/Start (61.18s)

TestStoppedBinaryUpgrade/Upgrade (72.39s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3543394611 start -p stopped-upgrade-637548 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3543394611 start -p stopped-upgrade-637548 --memory=3072 --vm-driver=docker  --container-runtime=crio: (44.853220059s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3543394611 -p stopped-upgrade-637548 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3543394611 -p stopped-upgrade-637548 stop: (11.97525017s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-637548 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-637548 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (15.561068838s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (72.39s)
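Note: the upgrade path verified here is start-with-old-binary, stop, then start-with-new-binary against the same profile. In sketch form (binary paths are the temp files from the log above):

    /tmp/minikube-v1.32.0.3543394611 start -p stopped-upgrade-637548 --memory=3072 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.32.0.3543394611 -p stopped-upgrade-637548 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-637548 --memory=3072 --driver=docker --container-runtime=crio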

TestPause/serial/SecondStartNoReconfiguration (6.43s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-388667 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-388667 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.421800576s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.43s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-200573 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-200573 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (72.407212ms)

-- stdout --
	* [NoKubernetes-200573] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
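Note: exit status 14 (MK_USAGE) is the expected result: --no-kubernetes and --kubernetes-version are mutually exclusive. If a version is pinned in the global config, the workaround from the error text above is:

    # Clear the globally configured version, then start without Kubernetes
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-200573 --no-kubernetes --driver=docker --container-runtime=crio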

TestNoKubernetes/serial/StartWithK8s (26.17s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-200573 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-200573 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.823446185s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-200573 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (26.17s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.52s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-637548
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-637548: (1.514611129s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.52s)

TestNetworkPlugins/group/false (4.94s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-084411 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-084411 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (190.187184ms)

-- stdout --
	* [false-084411] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21738
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1016 18:25:22.661967  201788 out.go:360] Setting OutFile to fd 1 ...
	I1016 18:25:22.662252  201788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:25:22.662263  201788 out.go:374] Setting ErrFile to fd 2...
	I1016 18:25:22.662270  201788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1016 18:25:22.662477  201788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21738-8849/.minikube/bin
	I1016 18:25:22.663039  201788 out.go:368] Setting JSON to false
	I1016 18:25:22.664098  201788 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4071,"bootTime":1760635052,"procs":276,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1016 18:25:22.664200  201788 start.go:141] virtualization: kvm guest
	I1016 18:25:22.667903  201788 out.go:179] * [false-084411] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1016 18:25:22.669601  201788 out.go:179]   - MINIKUBE_LOCATION=21738
	I1016 18:25:22.669609  201788 notify.go:220] Checking for updates...
	I1016 18:25:22.671314  201788 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1016 18:25:22.672929  201788 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21738-8849/kubeconfig
	I1016 18:25:22.674502  201788 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21738-8849/.minikube
	I1016 18:25:22.676094  201788 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1016 18:25:22.680369  201788 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1016 18:25:22.682581  201788 config.go:182] Loaded profile config "NoKubernetes-200573": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:25:22.682738  201788 config.go:182] Loaded profile config "force-systemd-env-275318": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:25:22.682875  201788 config.go:182] Loaded profile config "offline-crio-747718": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1016 18:25:22.683013  201788 driver.go:421] Setting default libvirt URI to qemu:///system
	I1016 18:25:22.711458  201788 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1016 18:25:22.711538  201788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1016 18:25:22.779450  201788 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:83 SystemTime:2025-10-16 18:25:22.768230689 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1016 18:25:22.779579  201788 docker.go:318] overlay module found
	I1016 18:25:22.784324  201788 out.go:179] * Using the docker driver based on user configuration
	I1016 18:25:22.786001  201788 start.go:305] selected driver: docker
	I1016 18:25:22.786023  201788 start.go:925] validating driver "docker" against <nil>
	I1016 18:25:22.786038  201788 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1016 18:25:22.788450  201788 out.go:203] 
	W1016 18:25:22.790055  201788 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1016 18:25:22.791764  201788 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-084411 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-084411

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-084411

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-084411

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-084411

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-084411

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-084411

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-084411

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-084411

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-084411

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-084411

>>> host: /etc/nsswitch.conf:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: /etc/hosts:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: /etc/resolv.conf:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-084411

>>> host: crictl pods:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: crictl containers:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> k8s: describe netcat deployment:
error: context "false-084411" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-084411" does not exist

>>> k8s: netcat logs:
error: context "false-084411" does not exist

>>> k8s: describe coredns deployment:
error: context "false-084411" does not exist

>>> k8s: describe coredns pods:
error: context "false-084411" does not exist

>>> k8s: coredns logs:
error: context "false-084411" does not exist

>>> k8s: describe api server pod(s):
error: context "false-084411" does not exist

>>> k8s: api server logs:
error: context "false-084411" does not exist

>>> host: /etc/cni:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: ip a s:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: ip r s:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: iptables-save:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: iptables table nat:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> k8s: describe kube-proxy daemon set:
error: context "false-084411" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-084411" does not exist

>>> k8s: kube-proxy logs:
error: context "false-084411" does not exist

>>> host: kubelet daemon status:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: kubelet daemon config:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> k8s: kubelet logs:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 16 Oct 2025 18:24:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: offline-crio-747718
contexts:
- context:
    cluster: offline-crio-747718
    extensions:
    - extension:
        last-update: Thu, 16 Oct 2025 18:24:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: offline-crio-747718
  name: offline-crio-747718
current-context: ""
kind: Config
users:
- name: offline-crio-747718
  user:
    client-certificate: /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/offline-crio-747718/client.crt
    client-key: /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/offline-crio-747718/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-084411

>>> host: docker daemon status:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: docker daemon config:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: /etc/docker/daemon.json:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: docker system info:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: cri-docker daemon status:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: cri-docker daemon config:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: cri-dockerd version:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: containerd daemon status:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: containerd daemon config:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: /etc/containerd/config.toml:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: containerd config dump:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: crio daemon status:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: crio daemon config:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: /etc/crio:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

>>> host: crio config:
* Profile "false-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-084411"

----------------------- debugLogs end: false-084411 [took: 4.583703449s] --------------------------------
helpers_test.go:175: Cleaning up "false-084411" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-084411
--- PASS: TestNetworkPlugins/group/false (4.94s)
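Note: the exit status 14 above is the asserted behavior: the crio runtime has no built-in networking, so minikube rejects --cni=false before creating anything. Assuming the standard --cni options (auto, bridge, calico, cilium, flannel, kindnet, or a path to a CNI manifest), a valid crio start would look like:

    minikube start -p false-084411 --driver=docker --container-runtime=crio --cni=bridge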

TestNoKubernetes/serial/StartWithStopK8s (19.44s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-200573 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-200573 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (16.850400774s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-200573 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-200573 status -o json: exit status 2 (342.468038ms)

-- stdout --
	{"Name":"NoKubernetes-200573","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-200573
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-200573: (2.242645385s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.44s)

TestNoKubernetes/serial/Start (6.67s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-200573 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-200573 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.668468837s)
--- PASS: TestNoKubernetes/serial/Start (6.67s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-200573 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-200573 "sudo systemctl is-active --quiet service kubelet": exit status 1 (285.166421ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
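Note: the non-zero exit is the assertion passing. systemctl is-active exits 3 when the queried unit is inactive, and minikube ssh surfaces that as its own exit status 1 (hence the "Process exited with status 3" in stderr). A quick manual check of the same condition:

    minikube ssh -p NoKubernetes-200573 "sudo systemctl is-active kubelet"
    # prints "inactive" and exits 3 when Kubernetes is not running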

TestNoKubernetes/serial/ProfileList (2.21s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.429602691s)
--- PASS: TestNoKubernetes/serial/ProfileList (2.21s)

TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-200573
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-200573: (1.306679579s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

TestNoKubernetes/serial/StartNoArgs (6.58s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-200573 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-200573 --driver=docker  --container-runtime=crio: (6.581817034s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.58s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-200573 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-200573 "sudo systemctl is-active --quiet service kubelet": exit status 1 (349.568736ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

TestStartStop/group/old-k8s-version/serial/FirstStart (51.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-956814 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-956814 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.075281518s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.08s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-956814 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a0840267-3a77-4fd9-8a8f-decbfcf3849a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a0840267-3a77-4fd9-8a8f-decbfcf3849a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00353206s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-956814 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.31s)

TestStartStop/group/no-preload/serial/FirstStart (52.68s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-808539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-808539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (52.676491615s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.68s)

TestStartStop/group/old-k8s-version/serial/Stop (17.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-956814 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-956814 --alsologtostderr -v=3: (17.522427639s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (17.52s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-956814 -n old-k8s-version-956814
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-956814 -n old-k8s-version-956814: exit status 7 (69.868471ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-956814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
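Note: exit status 7 from minikube status denotes a stopped host, which the harness explicitly tolerates ("may be ok" above); addons can still be enabled while the cluster is down and take effect on the next start. The pattern, as a sketch:

    minikube status --format='{{.Host}}' -p old-k8s-version-956814   # prints "Stopped", exits 7
    minikube addons enable dashboard -p old-k8s-version-956814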

TestStartStop/group/old-k8s-version/serial/SecondStart (51.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-956814 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-956814 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.315268365s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-956814 -n old-k8s-version-956814
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.64s)

TestStartStop/group/no-preload/serial/DeployApp (8.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-808539 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [92e14652-216c-4f68-9dcc-f986c05ef8d4] Pending
helpers_test.go:352: "busybox" [92e14652-216c-4f68-9dcc-f986c05ef8d4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [92e14652-216c-4f68-9dcc-f986c05ef8d4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004344036s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-808539 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.27s)

TestStartStop/group/no-preload/serial/Stop (16.2s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-808539 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-808539 --alsologtostderr -v=3: (16.195275543s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.20s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-v4mf2" [30ae3852-d8ac-427d-8da1-8439a752e2d4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003116017s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-v4mf2" [30ae3852-d8ac-427d-8da1-8439a752e2d4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003492208s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-956814 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-808539 -n no-preload-808539
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-808539 -n no-preload-808539: exit status 7 (68.682891ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-808539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (50.24s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-808539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-808539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.926187916s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-808539 -n no-preload-808539
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.24s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-956814 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/FirstStart (43.56s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-063117 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-063117 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.559142568s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (43.56s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-523257 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-523257 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m9.731005151s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.73s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-j8f8d" [98611c84-133a-4ab8-992f-3f5889238b0e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003902434s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/DeployApp (8.23s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-063117 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2f13025d-16dc-4451-9e8d-c37732eb709a] Pending
helpers_test.go:352: "busybox" [2f13025d-16dc-4451-9e8d-c37732eb709a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2f13025d-16dc-4451-9e8d-c37732eb709a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005117502s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-063117 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.23s)
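
DeployApp applies testdata/busybox.yaml, waits for the integration-test=busybox pod to become Ready, then runs ulimit -n inside it to confirm the runtime's file-descriptor limit reaches the container. A hand-run equivalent, where kubectl wait stands in (as an assumption) for the test's own polling helper:

	kubectl --context embed-certs-063117 create -f testdata/busybox.yaml
	# Substitute for the test's 8m polling loop:
	kubectl --context embed-certs-063117 wait pod -l integration-test=busybox \
	  --for=condition=Ready --timeout=8m
	kubectl --context embed-certs-063117 exec busybox -- /bin/sh -c "ulimit -n"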

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-j8f8d" [98611c84-133a-4ab8-992f-3f5889238b0e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004070121s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-808539 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-808539 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Stop (18.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-063117 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-063117 --alsologtostderr -v=3: (18.090983367s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.09s)

TestStartStop/group/newest-cni/serial/FirstStart (26.65s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-794682 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-794682 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (26.647604222s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.65s)
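
Two flags distinguish the newest-cni profile: --extra-config=component.key=value forwards a setting to the named component (here kubeadm's pod-network-cidr, so a CNI can later be installed against a known range), and --wait is narrowed to components that can become healthy without a CNI in place. The same start, reformatted for readability:

	out/minikube-linux-amd64 start -p newest-cni-794682 --memory=3072 \
	  --wait=apiserver,system_pods,default_sa \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1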

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-063117 -n embed-certs-063117
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-063117 -n embed-certs-063117: exit status 7 (73.016419ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-063117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
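
Note the exit-code handling above: minikube status reports cluster state through its exit code, so after a stop the command fails with exit status 7 while printing Stopped, and the test explicitly tolerates that before enabling the addon. A minimal re-run of the same two steps:

	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-063117 -n embed-certs-063117 \
	  || echo "status exit=$?   # 7 plus 'Stopped' is expected while the host is down"
	out/minikube-linux-amd64 addons enable dashboard -p embed-certs-063117 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4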

TestStartStop/group/embed-certs/serial/SecondStart (44.87s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-063117 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-063117 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (44.478227221s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-063117 -n embed-certs-063117
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.87s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-523257 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e96222c9-4604-49ac-a0f7-6328bfe2f82a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e96222c9-4604-49ac-a0f7-6328bfe2f82a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003653655s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-523257 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

TestStartStop/group/newest-cni/serial/Stop (3.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-794682 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-794682 --alsologtostderr -v=3: (3.37474944s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.37s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-794682 -n newest-cni-794682
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-794682 -n newest-cni-794682: exit status 7 (88.353326ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-794682 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (11.15s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-794682 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-794682 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.830191143s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-794682 -n newest-cni-794682
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.15s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (18.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-523257 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-523257 --alsologtostderr -v=3: (18.478354443s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.48s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-794682 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
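
VerifyKubernetesImages dumps the node's image list as JSON and reports anything outside the set minikube itself ships; kindest/kindnetd shows up because the CNI image is pulled separately. A rough hand-check (the grep filter below is an illustration, not the test's actual allowlist):

	out/minikube-linux-amd64 -p newest-cni-794682 image list --format=json \
	  | grep -v 'registry.k8s.io'   # crude stand-in for the test's expected-image set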

TestNetworkPlugins/group/auto/Start (43.4s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-084411 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-084411 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (43.401414687s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.40s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-523257 -n default-k8s-diff-port-523257
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-523257 -n default-k8s-diff-port-523257: exit status 7 (70.120052ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-523257 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-523257 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-523257 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.772865189s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-523257 -n default-k8s-diff-port-523257
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.14s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tlp4f" [28398c65-3e03-41f6-98a9-0e25b57ac960] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004660535s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tlp4f" [28398c65-3e03-41f6-98a9-0e25b57ac960] Running
E1016 18:30:53.251254   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/addons-431183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003819613s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-063117 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-063117 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestNetworkPlugins/group/kindnet/Start (44.88s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-084411 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-084411 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (44.87978802s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (44.88s)

TestNetworkPlugins/group/calico/Start (51.17s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-084411 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-084411 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (51.173881645s)
--- PASS: TestNetworkPlugins/group/calico/Start (51.17s)

TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-084411 "pgrep -a kubelet"
I1016 18:31:24.796745   12375 config.go:182] Loaded profile config "auto-084411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)
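
KubeletFlags works by SSH-ing into the node and running pgrep; the -a flag prints each matching PID together with its full command line, which is what lets the test inspect the flags kubelet was actually started with:

	out/minikube-linux-amd64 ssh -p auto-084411 "pgrep -a kubelet"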

TestNetworkPlugins/group/auto/NetCatPod (8.21s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-084411 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-44p2h" [7c22b6b5-ef63-4744-8a84-9762b9c4a19a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-44p2h" [7c22b6b5-ef63-4744-8a84-9762b9c4a19a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004520158s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.21s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-084411 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)
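
The DNS check resolves the short name kubernetes.default from inside the pod, which only succeeds if cluster DNS and the pod's search path are wired up correctly; the fully qualified form is equivalent:

	kubectl --context auto-084411 exec deployment/netcat -- nslookup kubernetes.default
	# Same lookup without relying on the pod's DNS search path:
	kubectl --context auto-084411 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local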

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-084411 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-084411 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
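
Localhost and HairPin use the same netcat probe (-z scan-only, -w 5 timeout) against different targets: localhost:8080 checks that the container can reach itself directly, while the service name netcat forces traffic out through the service VIP and back into the same pod, i.e. hairpin NAT:

	kubectl --context auto-084411 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-084411 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"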

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-h7jqr" [c460252e-19fc-4ced-9611-9dd24ea4484c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004345973s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-h7jqr" [c460252e-19fc-4ced-9611-9dd24ea4484c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004497445s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-523257 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-523257 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-6vz7h" [795a265b-6024-45fb-9abe-980ef02ac7e0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00358234s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (54.9s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-084411 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-084411 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (54.897888783s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.90s)
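
Unlike the kindnet, calico, and flannel runs, which pass --cni a built-in plugin name, this profile hands it a manifest path, so minikube applies the user-supplied YAML instead of a bundled one:

	out/minikube-linux-amd64 start -p custom-flannel-084411 --memory=3072 \
	  --cni=testdata/kube-flannel.yaml \
	  --driver=docker --container-runtime=crio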

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-084411 "pgrep -a kubelet"
I1016 18:32:00.727275   12375 config.go:182] Loaded profile config "kindnet-084411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-084411 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6tkqp" [a71fefda-3069-47f2-a2ff-b55c258d2b53] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6tkqp" [a71fefda-3069-47f2-a2ff-b55c258d2b53] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005053867s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

TestNetworkPlugins/group/enable-default-cni/Start (38.8s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-084411 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-084411 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (38.794991147s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (38.80s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-hnfsl" [cc397569-f810-402e-b509-ba095330f2af] Running
E1016 18:32:08.352976   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:32:08.360483   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:32:08.372000   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:32:08.393483   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:32:08.435429   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:32:08.516884   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:32:08.678491   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:32:09.000311   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:32:09.642708   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:32:10.924457   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005806046s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
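
ControllerPod only waits for the CNI's node agent (label k8s-app=calico-node) to be Running; the interleaved cert_rotation errors appear to come from a background client-cert watcher still tracking the already-deleted old-k8s-version profile and do not affect the result. A kubectl wait equivalent of the polling (an assumption, since the test uses its own helper):

	kubectl --context calico-084411 -n kube-system wait pod -l k8s-app=calico-node \
	  --for=condition=Ready --timeout=10m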

TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-084411 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-084411 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-084411 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-084411 "pgrep -a kubelet"
I1016 18:32:13.022484   12375 config.go:182] Loaded profile config "calico-084411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (8.23s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-084411 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8chks" [08b30ff7-7c8b-4515-a6dc-cee789753d79] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1016 18:32:13.486778   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-8chks" [08b30ff7-7c8b-4515-a6dc-cee789753d79] Running
E1016 18:32:18.608148   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.003881822s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.23s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-084411 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-084411 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-084411 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/flannel/Start (47.81s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-084411 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-084411 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (47.808485256s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.81s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-084411 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-084411 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t9k4p" [ccb87fc5-5d88-44e0-b332-d24fbbb42ff1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t9k4p" [ccb87fc5-5d88-44e0-b332-d24fbbb42ff1] Running
E1016 18:32:49.331763   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004771944s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.24s)

TestNetworkPlugins/group/bridge/Start (68.54s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-084411 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-084411 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m8.542581071s)
--- PASS: TestNetworkPlugins/group/bridge/Start (68.54s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-084411 "pgrep -a kubelet"
I1016 18:32:49.731368   12375 config.go:182] Loaded profile config "custom-flannel-084411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-084411 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lj6s5" [fd4d36bf-24ac-4e5e-a50c-3e2afe10f5ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lj6s5" [fd4d36bf-24ac-4e5e-a50c-3e2afe10f5ad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.005014181s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-084411 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-084411 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-084411 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-084411 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-084411 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-084411 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-dpwj8" [5a5f7192-6360-40e8-a61b-396a54df7197] Running
E1016 18:33:21.443859   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/no-preload-808539/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003987928s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-084411 "pgrep -a kubelet"
I1016 18:33:26.510023   12375 config.go:182] Loaded profile config "flannel-084411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-084411 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vs692" [12c6f91e-42c4-4312-9d20-0c9c00104cba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vs692" [12c6f91e-42c4-4312-9d20-0c9c00104cba] Running
E1016 18:33:30.294003   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/old-k8s-version-956814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1016 18:33:31.685624   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/no-preload-808539/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003451382s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

TestNetworkPlugins/group/flannel/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-084411 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

TestNetworkPlugins/group/flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-084411 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

TestNetworkPlugins/group/flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-084411 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-084411 "pgrep -a kubelet"
E1016 18:33:52.167947   12375 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/no-preload-808539/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1016 18:33:52.232454   12375 config.go:182] Loaded profile config "bridge-084411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (7.22s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-084411 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nm54f" [61f477e4-1ba1-458c-ba2d-6d91de51b41f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nm54f" [61f477e4-1ba1-458c-ba2d-6d91de51b41f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 7.003816024s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (7.22s)

TestNetworkPlugins/group/bridge/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-084411 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

TestNetworkPlugins/group/bridge/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-084411 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

TestNetworkPlugins/group/bridge/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-084411 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.08s)

Test skip (26/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)
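All six DownloadOnly skips above come from the same guard: when a preload tarball for the requested Kubernetes version is already on disk, there is nothing left to download, so the cached-images and binaries subtests bail out immediately. A sketch of that gating pattern (the cache path below is an illustrative placeholder, not minikube's exact layout):

package downloadonly

import (
	"os"
	"path/filepath"
	"testing"
)

func TestCachedImages(t *testing.T) {
	// Hypothetical preload location; the real path is resolved by the harness.
	preload := filepath.Join(os.Getenv("HOME"), ".minikube", "cache",
		"preloaded-tarball", "preloaded-images-k8s-v1.34.1.tar.lz4")
	if _, err := os.Stat(preload); err == nil {
		t.Skip("Preload exists, images won't be cached")
	}
	// The download-and-cache assertions would only run from here on.
}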

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
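Most of the remaining skips in this section are runtime gates: this job runs crio, so docker-only tests step aside with a one-line guard at the top of the test function. Sketched below; ContainerRuntime is a hypothetical stand-in for however the harness reads the configured runtime, not a real minikube identifier:

package dockerflags

import "testing"

// ContainerRuntime is a placeholder, not minikube's real helper; it
// stands in for however the harness reads the configured runtime.
func ContainerRuntime() string { return "crio" }

func TestDockerFlags(t *testing.T) {
	if rt := ContainerRuntime(); rt != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", rt)
	}
	// docker-specific flag assertions would run here
}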

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-246527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-246527
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (3.67s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-084411 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-084411

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-084411

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-084411

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-084411

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-084411

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-084411

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-084411

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-084411

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-084411

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-084411

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: /etc/hosts:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: /etc/resolv.conf:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-084411

>>> host: crictl pods:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: crictl containers:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> k8s: describe netcat deployment:
error: context "kubenet-084411" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-084411" does not exist

>>> k8s: netcat logs:
error: context "kubenet-084411" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-084411" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-084411" does not exist

>>> k8s: coredns logs:
error: context "kubenet-084411" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-084411" does not exist

>>> k8s: api server logs:
error: context "kubenet-084411" does not exist

>>> host: /etc/cni:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: ip a s:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: ip r s:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: iptables-save:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: iptables table nat:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-084411" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-084411" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-084411" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: kubelet daemon config:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> k8s: kubelet logs:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 16 Oct 2025 18:24:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: offline-crio-747718
contexts:
- context:
    cluster: offline-crio-747718
    extensions:
    - extension:
        last-update: Thu, 16 Oct 2025 18:24:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: offline-crio-747718
  name: offline-crio-747718
current-context: ""
kind: Config
users:
- name: offline-crio-747718
  user:
    client-certificate: /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/offline-crio-747718/client.crt
    client-key: /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/offline-crio-747718/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-084411

>>> host: docker daemon status:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: docker daemon config:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: docker system info:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: cri-docker daemon status:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: cri-docker daemon config:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: cri-dockerd version:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: containerd daemon status:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: containerd daemon config:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: containerd config dump:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: crio daemon status:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: crio daemon config:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: /etc/crio:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

>>> host: crio config:
* Profile "kubenet-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-084411"

----------------------- debugLogs end: kubenet-084411 [took: 3.491966633s] --------------------------------
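Every probe in the dump above fails with either "context was not found" or a "Profile ... not found" hint because the kubenet test skips before any cluster is created; the debug-log collector still runs, and kubectl falls back to the only kubeconfig entry on the machine (offline-crio-747718, with an empty current-context, as the "kubectl config" section shows). The cilium dump below fails the same way. A quick way to confirm which contexts actually exist, as a sketch shelling out to kubectl:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// kubenet-084411 never shows up here because that profile was never started.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl error:", err)
	}
	fmt.Print(string(out))
}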
helpers_test.go:175: Cleaning up "kubenet-084411" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-084411
--- SKIP: TestNetworkPlugins/group/kubenet (3.67s)

TestNetworkPlugins/group/cilium (3.9s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-084411 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-084411

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-084411

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-084411

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-084411

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-084411

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-084411

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-084411

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-084411

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-084411

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-084411

>>> host: /etc/nsswitch.conf:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: /etc/hosts:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: /etc/resolv.conf:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-084411

>>> host: crictl pods:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: crictl containers:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> k8s: describe netcat deployment:
error: context "cilium-084411" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-084411" does not exist

>>> k8s: netcat logs:
error: context "cilium-084411" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-084411" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-084411" does not exist

>>> k8s: coredns logs:
error: context "cilium-084411" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-084411" does not exist

>>> k8s: api server logs:
error: context "cilium-084411" does not exist

>>> host: /etc/cni:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: ip a s:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: ip r s:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: iptables-save:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: iptables table nat:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-084411

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-084411

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-084411" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-084411" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-084411

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-084411

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-084411" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-084411" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-084411" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-084411" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-084411" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: kubelet daemon config:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> k8s: kubelet logs:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21738-8849/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 16 Oct 2025 18:24:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: offline-crio-747718
contexts:
- context:
    cluster: offline-crio-747718
    extensions:
    - extension:
        last-update: Thu, 16 Oct 2025 18:24:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: offline-crio-747718
  name: offline-crio-747718
current-context: ""
kind: Config
users:
- name: offline-crio-747718
  user:
    client-certificate: /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/offline-crio-747718/client.crt
    client-key: /home/jenkins/minikube-integration/21738-8849/.minikube/profiles/offline-crio-747718/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-084411

>>> host: docker daemon status:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: docker daemon config:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: docker system info:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: cri-docker daemon status:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: cri-docker daemon config:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: cri-dockerd version:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: containerd daemon status:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: containerd daemon config:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: containerd config dump:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: crio daemon status:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: crio daemon config:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: /etc/crio:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

>>> host: crio config:
* Profile "cilium-084411" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-084411"

----------------------- debugLogs end: cilium-084411 [took: 3.685931216s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-084411" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-084411
--- SKIP: TestNetworkPlugins/group/cilium (3.90s)